[jira] [Updated] (PHOENIX-6214) client connect throw NEWER_SCHEMA_FOUND exception after phoenix.schema.isNamespaceMappingEnabled is true

2022-09-27 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-6214:
---
Attachment: (was: 123.png)

> client connect throw NEWER_SCHEMA_FOUND exception after 
> phoenix.schema.isNamespaceMappingEnabled is true
> 
>
> Key: PHOENIX-6214
> URL: https://issues.apache.org/jira/browse/PHOENIX-6214
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
>
> After setting phoenix.schema.isNamespaceMappingEnabled to true, enter “use 
> "SYSTEM"” in sqlline.py and quit sqlline. A NewerSchemaAlreadyExistsException 
> is thrown when entering sqlline again. As shown below:
> org.apache.phoenix.schema.NewerSchemaAlreadyExistsException: ERROR 721 
> (42M04): Schema with given name already exists schemaName=SYSTEM
>  at org.apache.phoenix.schema.MetaDataClient.createSchema(MetaDataClient.java:4111)
>  at org.apache.phoenix.compile.CreateSchemaCompiler$1.execute(CreateSchemaCompiler.java:46)
>  at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:394)
>  at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:377)
>  at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
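
A minimal JDBC sketch of the reproduction steps above (a sketch only: the
connection URL is illustrative, and it assumes namespace mapping is enabled on
both client and server):

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.Properties;

public class UseSystemRepro {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("phoenix.schema.isNamespaceMappingEnabled", "true");
        // First session: switch to the SYSTEM schema, then disconnect.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181", props);
             Statement stmt = conn.createStatement()) {
            stmt.execute("USE \"SYSTEM\"");
        }
        // Second session: per the report above, reconnecting now fails with
        // NewerSchemaAlreadyExistsException (ERROR 721, 42M04).
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181", props)) {
            System.out.println("reconnected: " + !conn.isClosed());
        }
    }
}
{code}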



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6214) client connect throw NEWER_SCHEMA_FOUND exception after phoenix.schema.isNamespaceMappingEnabled is true

2022-09-27 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-6214:
---
Attachment: 123.png

> client connect throw NEWER_SCHEMA_FOUND exception after 
> phoenix.schema.isNamespaceMappingEnabled is true
> 
>
> Key: PHOENIX-6214
> URL: https://issues.apache.org/jira/browse/PHOENIX-6214
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Attachments: 123.png
>
>
> After setting phoenix.schema.isNamespaceMappingEnabled to true, enter “use 
> "SYSTEM"” in sqlline.py and quit sqlline. A NewerSchemaAlreadyExistsException 
> is thrown when entering sqlline again. As shown below:
> org.apache.phoenix.schema.NewerSchemaAlreadyExistsException: ERROR 721 
> (42M04): Schema with given name already exists schemaName=SYSTEM
>  at org.apache.phoenix.schema.MetaDataClient.createSchema(MetaDataClient.java:4111)
>  at org.apache.phoenix.compile.CreateSchemaCompiler$1.execute(CreateSchemaCompiler.java:46)
>  at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:394)
>  at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:377)
>  at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-6511) Deletes fail in case of failed region split

2022-01-06 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang resolved PHOENIX-6511.

Resolution: Fixed

> Deletes fail in case of failed region split
> ---
>
> Key: PHOENIX-6511
> URL: https://issues.apache.org/jira/browse/PHOENIX-6511
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.16.1
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
>Priority: Critical
> Fix For: 4.17.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (PHOENIX-4296) Dead loop in HBase reverse scan when amount of scan data is greater than SCAN_RESULT_CHUNK_SIZE

2021-12-09 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-4296:
---
Description: 
This problem seems to occur only with reverse scans, not forward scans. When the 
amount of scanned data is greater than SCAN_RESULT_CHUNK_SIZE (default 2999), 
ChunkedResultIteratorFactory calls getResultIterator multiple times. But 
getResultIterator always readjusts startRow; in a reverse scan it should 
readjust stopRow instead. For example
{code:java}
if (ScanUtil.isReversed(scan)) {
scan.setStopRow(ByteUtil.copyKeyBytesIfNecessary(lastKey));
} else {
scan.setStartRow(ByteUtil.copyKeyBytesIfNecessary(lastKey));
}
{code}

  was:
This problem seems to occur only with reverse scans, not forward scans. When the 
amount of scanned data is greater than SCAN_RESULT_CHUNK_SIZE (default 2999), 
ChunkedResultIteratorFactory calls getResultIterator multiple times. But 
getResultIterator always readjusts startRow; in a reverse scan it should 
readjust stopRow instead. For example
{code:java}
if (ScanUtil.isReversed(scan)) {
scan.setStopRow(ByteUtil.copyKeyBytesIfNecessary(lastKey));
} else {
scan.setStartRow(ByteUtil.copyKeyBytesIfNecessary(lastKey));
}
{code}



> Dead loop in HBase reverse scan when amount of scan data is greater than 
> SCAN_RESULT_CHUNK_SIZE
> ---
>
> Key: PHOENIX-4296
> URL: https://issues.apache.org/jira/browse/PHOENIX-4296
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>Reporter: rukawakang
>Assignee: Chen Feng
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-4296-4.x-HBase-1.2-v2.patch, 
> PHOENIX-4296-4.x-HBase-1.2-v3.patch, PHOENIX-4296-4.x-HBase-1.2-v4.patch, 
> PHOENIX-4296-4.x-HBase-1.2.patch, PHOENIX-4296.patch
>
>
> This problem seems to occur only with reverse scans, not forward scans. When 
> the amount of scanned data is greater than SCAN_RESULT_CHUNK_SIZE (default 
> 2999), ChunkedResultIteratorFactory calls getResultIterator multiple times. 
> But getResultIterator always readjusts startRow; in a reverse scan it should 
> readjust stopRow instead. For example
> {code:java}
> if (ScanUtil.isReversed(scan)) {
> scan.setStopRow(ByteUtil.copyKeyBytesIfNecessary(lastKey));
> } else {
> scan.setStartRow(ByteUtil.copyKeyBytesIfNecessary(lastKey));
> }
> {code}
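
To see why readjusting startRow loops forever, here is a small illustration of
reversed-scan semantics using the plain HBase client API (the row keys are made
up; this is a sketch, not Phoenix code):

{code:java}
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class ReverseScanBounds {
    public static void main(String[] args) {
        // In a reversed Scan, startRow is the *upper* bound where scanning
        // begins and stopRow is the *lower* bound where it ends.
        Scan scan = new Scan();
        scan.setReversed(true);
        scan.setStartRow(Bytes.toBytes("row-999")); // starts at the largest key
        scan.setStopRow(Bytes.toBytes("row-000"));  // walks down toward the smallest
        // Fetching the next chunk therefore means lowering stopRow past the
        // last key returned; readjusting startRow leaves the scan range
        // unchanged, so the same chunk is fetched forever -- the dead loop.
    }
}
{code}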



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (PHOENIX-6304) Fix TenantSpecificTablesDMLIT failed

2021-01-06 Thread Chao Wang (Jira)
Chao Wang created PHOENIX-6304:
--

 Summary: Fix TenantSpecificTablesDMLIT failed
 Key: PHOENIX-6304
 URL: https://issues.apache.org/jira/browse/PHOENIX-6304
 Project: Phoenix
  Issue Type: Test
Affects Versions: 5.0.0
Reporter: Chao Wang
Assignee: Chao Wang


TenantSpecificTablesDMLIT fails intermittently.

org.apache.phoenix.thirdparty.com.google.common.util.concurrent.UncheckedExecutionException:
 org.apache.phoenix.exception.PhoenixNonRetryableRuntimeException: 
java.lang.ClassNotFoundException: org.notreal.class
 at 
org.apache.phoenix.thirdparty.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2049)
 at 
org.apache.phoenix.thirdparty.com.google.common.cache.LocalCache.get(LocalCache.java:3849)
 at 
org.apache.phoenix.thirdparty.com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4711)
 at 
org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:261)
 at 
org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:142)
 at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:241)
 at java.sql.DriverManager.getConnection(DriverManager.java:664)
 at java.sql.DriverManager.getConnection(DriverManager.java:270)
 at 
org.apache.phoenix.coprocessor.WhereConstantParser.getConnectionlessConnection(WhereConstantParser.java:106)
 at 
org.apache.phoenix.coprocessor.WhereConstantParser.addViewInfoToPColumnsIfNeeded(W



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6262) Bulk Load have a bug in lowercase tablename

2021-01-06 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-6262:
---
Attachment: PHOENIX-6262.master.003.patch

> Bulk Load have a bug in lowercase tablename
> ---
>
> Key: PHOENIX-6262
> URL: https://issues.apache.org/jira/browse/PHOENIX-6262
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.0.0
>Reporter: zhengjiewen
>Assignee: Chao Wang
>Priority: Major
> Fix For: 5.1.0
>
> Attachments: PHOENIX-6262.master.001.patch, 
> PHOENIX-6262.master.002.patch, PHOENIX-6262.master.003.patch
>
>
> h1. Bulk load with a lowercase table name
> When I use the Phoenix bulk load command to import a CSV file into a Phoenix 
> table, I get this error:
> {code:java}
> Exception in thread "main" java.lang.IllegalArgumentException: Table 
> "test"."ods_om_om_order_test" not found
> at org.apache.phoenix.util.SchemaUtil.generateColumnInfo(SchemaUtil.java:956)
> at org.apache.phoenix.mapreduce.AbstractBulkLoadTool.buildImportColumns(AbstractBulkLoadTool.java:377)
> at org.apache.phoenix.mapreduce.AbstractBulkLoadTool.loadData(AbstractBulkLoadTool.java:211)
> at org.apache.phoenix.mapreduce.AbstractBulkLoadTool.run(AbstractBulkLoadTool.java:180)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at org.apache.phoenix.mapreduce.CsvBulkLoadTool.main(CsvBulkLoadTool.java:109)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:313)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:227)
> {code}
> My command is:
> {code:java}
> hadoop jar /opt/cloudera/parcels/CDH/lib/hbase/lib/phoenix-5.0.0-cdh6.2.0-client.jar \
>   org.apache.phoenix.mapreduce.CsvBulkLoadTool -s \"\"test\"\" -t \"\"ods_om_om_order_test\"\" \
>   -i /tmp/phoenix/ods_om_om_order_test5/data.csv
> {code}
> I found that the source code has a bug in 
> *org.apache.phoenix.jdbc.PhoenixDatabaseMetaData#getColumns*. This method 
> splices the tableName and schemaName into the SQL statement that queries 
> SYSTEM.CATALOG, but if the tableName or schemaName is lowercase they become 
> *'"test"'* and *'"ods_om_om_order_test"'*, so the query finds no rows and a 
> table-not-found exception is returned.
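
For context, a hedged sketch of the identifier rule the report points at (the
helper below is hypothetical, not Phoenix API): an unquoted SQL identifier is
normalized to upper case, while a double-quoted one keeps its case, so embedding
the still-quoted strings '"test"' and '"ods_om_om_order_test"' in the metadata
query matches nothing in SYSTEM.CATALOG:

{code:java}
// Hypothetical illustration of SQL identifier normalization, not Phoenix code.
public class IdentifierNormalization {
    static String normalize(String identifier) {
        if (identifier.startsWith("\"") && identifier.endsWith("\"")) {
            // Quoted identifier: strip the quotes, preserve case.
            return identifier.substring(1, identifier.length() - 1);
        }
        // Unquoted identifier: upper-cased before the metadata lookup.
        return identifier.toUpperCase();
    }

    public static void main(String[] args) {
        System.out.println(normalize("\"test\""));     // test   (matches the stored name)
        System.out.println(normalize("test"));         // TEST   (does not match "test")
        System.out.println(normalize("\"\"test\"\"")); // "test" (quotes kept -> no match)
    }
}
{code}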



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6050) Set properties is invalid

2021-01-05 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-6050:
---
Summary: Set properties is invalid  (was: Set properties is invalid in 
client)

> Set properties is invalid
> -
>
> Key: PHOENIX-6050
> URL: https://issues.apache.org/jira/browse/PHOENIX-6050
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1, 5.0.0
> Environment: phoenix 4.13.1
> hbase 1.3.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Major
> Fix For: 5.1.0
>
> Attachments: PHOENIX-6050.master.001.patch, 
> PHOENIX-6050.master.002.patch
>
>
> I set the property "phoenix.query.threadPoolSize" on the client, but it has 
> no effect: the thread pool always uses the default value (128).
> The code is:
> {code:java}
> Properties properties = new Properties();
> properties.setProperty("phoenix.query.threadPoolSize", "300");
> PropertiesResolve phoenixpr = new PropertiesResolve();
> String phoenixdriver = phoenixpr.readMapByKey("com/main/SyncData.properties", "phoenix_driver");
> String phoenixjdbc = phoenixpr.readMapByKey("com/main/SyncData.properties", "phoenix_jdbc");
> Class.forName(phoenixdriver);
> return DriverManager.getConnection(phoenixjdbc, properties);
> {code}
> The exception is:
> Error: Task org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask@6e91893 
> rejected from org.apache.phoenix.job.JobManager$1@26ae880a[Running, pool size = 
> 128, active threads = 128, queued tasks = 5000, completed tasks = 36647] 
> (state=08000,code=101)
> org.apache.phoenix.exception.PhoenixIOException: Task 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask@6e91893 rejected 
> from org.apache.phoenix.job.JobManager$1@26ae880a[Running, pool size = 128, 
> active threads = 128, queued tasks = 5000, completed tasks = 36647]
> at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:120)
> at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1024)
> at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:916)
> ^Reason:^
> I find that PhoenixDriver creates the thread pool before initializing the 
> config from the supplied properties, so when the pool is created the config 
> still holds the default value.
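
A minimal sketch (hypothetical classes, not the real driver code) of the
initialization-order problem described above: the pool is sized when the driver
singleton comes up, before the caller's Properties are ever seen, so the "300"
never takes effect:

{code:java}
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical stand-in for the driver, illustrating only the ordering.
class SketchDriver {
    // Created eagerly with the built-in default (128) at class-load time.
    static final ExecutorService POOL = Executors.newFixedThreadPool(128);

    static void connect(String url, Properties info) {
        // The caller's "phoenix.query.threadPoolSize" = "300" only arrives
        // here, after POOL has already been sized, so it is never applied.
        System.out.println("requested pool size: "
                + info.getProperty("phoenix.query.threadPoolSize"));
    }
}
{code}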



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6050) Set properties is invalid in client

2021-01-05 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-6050:
---
Attachment: (was: PHOENIX-6050-4.13-HBase-1.3.patch)

> Set properties is invalid in client
> ---
>
> Key: PHOENIX-6050
> URL: https://issues.apache.org/jira/browse/PHOENIX-6050
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1, 5.0.0
> Environment: phoenix 4.13.1
> hbase 1.3.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Major
> Fix For: 5.1.0
>
> Attachments: PHOENIX-6050.master.001.patch, 
> PHOENIX-6050.master.002.patch
>
>
> I set the property "phoenix.query.threadPoolSize" on the client, but it has 
> no effect: the thread pool always uses the default value (128).
> The code is:
> {code:java}
> Properties properties = new Properties();
> properties.setProperty("phoenix.query.threadPoolSize", "300");
> PropertiesResolve phoenixpr = new PropertiesResolve();
> String phoenixdriver = phoenixpr.readMapByKey("com/main/SyncData.properties", "phoenix_driver");
> String phoenixjdbc = phoenixpr.readMapByKey("com/main/SyncData.properties", "phoenix_jdbc");
> Class.forName(phoenixdriver);
> return DriverManager.getConnection(phoenixjdbc, properties);
> {code}
> The exception is:
> Error: Task org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask@6e91893 
> rejected from org.apache.phoenix.job.JobManager$1@26ae880a[Running, pool size = 
> 128, active threads = 128, queued tasks = 5000, completed tasks = 36647] 
> (state=08000,code=101)
> org.apache.phoenix.exception.PhoenixIOException: Task 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask@6e91893 rejected 
> from org.apache.phoenix.job.JobManager$1@26ae880a[Running, pool size = 128, 
> active threads = 128, queued tasks = 5000, completed tasks = 36647]
> at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:120)
> at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1024)
> at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:916)
> ^Reason:^
> I find that PhoenixDriver creates the thread pool before initializing the 
> config from the supplied properties, so when the pool is created the config 
> still holds the default value.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-6215) Failed to create local index if I set column type is tinyint with default value

2021-01-05 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang reassigned PHOENIX-6215:
--

Assignee: Chao Wang

> Failed to create local index if I set column type is tinyint with default 
> value 
> 
>
> Key: PHOENIX-6215
> URL: https://issues.apache.org/jira/browse/PHOENIX-6215
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.0.0
> Environment: phoenix-5.0.0-cdh6.2.0
>Reporter: 张嘉昊
>Assignee: Chao Wang
>Priority: Major
> Attachments: image-2020-11-05-17-32-00-661.png
>
>
> This is my SQL:
> {code:java}
> CREATE TABLE TEST4PHOENIX.MYTEST (
>     tag_id varchar(200) not null,
>     user_id varchar(200) not null,
>     tag_name varchar(200) null,
>     is_delete tinyint default 0,
>     CONSTRAINT pk PRIMARY KEY (tag_id, user_id)
> ) SALT_BUCKETS = 10;
>
> upsert into TEST4PHOENIX.MYTEST values('LB_001', '10077110', 'test', 0);
> upsert into TEST4PHOENIX.MYTEST values('LB_002', '10077110', 'test', 1);
> create local index MYTEST_INDEX on TEST4PHOENIX.MYTEST("TAG_NAME", "IS_DELETE");
> {code}
> I set a default of 0 on column `is_delete`.
> After I create a local index, I see the following (I don't know if this is a 
> bug):
> select * from TEST4PHOENIX.MYTEST;
> ||TAG_ID||USER_ID||TAG_NAME||IS_DELETE||
> |LB_001|10077110|test|0|
> |LB_002|10077110|test|1|
> select * from TEST4PHOENIX.MYTEST_INDEX;
> ||0:TAG_NAME||0:IS_DELETE||:TAG_ID||:USER_ID||
> |test|0| |LB_00210077110|
> |test|0| |LB_00210077110|
> First, `is_delete` has two different values, but `MYTEST_INDEX` only shows 0, 
> the default value.
> Second, in `MYTEST_INDEX` the "TAG_ID" and "USER_ID" values have been changed, 
> so a query that uses the index may return wrong data, *while the column values 
> in `MYTEST` are right*.
> After trying many times, I found that if I create a local index including a 
> `tinyint` or `integer` column that has a default value, the data in the index 
> table will be wrong.
> Is this a bug, or am I making a mistake?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6050) Set properties is invalid in client

2021-01-04 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-6050:
---
Attachment: PHOENIX-6050.master.002.patch

> Set properties is invalid in client
> ---
>
> Key: PHOENIX-6050
> URL: https://issues.apache.org/jira/browse/PHOENIX-6050
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1, 5.0.0
> Environment: phoenix 4.13.1
> hbase 1.3.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Major
> Fix For: 5.1.0
>
> Attachments: PHOENIX-6050-4.13-HBase-1.3.patch, 
> PHOENIX-6050.master.001.patch, PHOENIX-6050.master.002.patch
>
>
> I set the property "phoenix.query.threadPoolSize" on the client, but it has 
> no effect: the thread pool always uses the default value (128).
> The code is:
> {code:java}
> Properties properties = new Properties();
> properties.setProperty("phoenix.query.threadPoolSize", "300");
> PropertiesResolve phoenixpr = new PropertiesResolve();
> String phoenixdriver = phoenixpr.readMapByKey("com/main/SyncData.properties", "phoenix_driver");
> String phoenixjdbc = phoenixpr.readMapByKey("com/main/SyncData.properties", "phoenix_jdbc");
> Class.forName(phoenixdriver);
> return DriverManager.getConnection(phoenixjdbc, properties);
> {code}
> The exception is:
> Error: Task org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask@6e91893 
> rejected from org.apache.phoenix.job.JobManager$1@26ae880a[Running, pool size = 
> 128, active threads = 128, queued tasks = 5000, completed tasks = 36647] 
> (state=08000,code=101)
> org.apache.phoenix.exception.PhoenixIOException: Task 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask@6e91893 rejected 
> from org.apache.phoenix.job.JobManager$1@26ae880a[Running, pool size = 128, 
> active threads = 128, queued tasks = 5000, completed tasks = 36647]
> at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:120)
> at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1024)
> at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:916)
> ^Reason:^
> I find that PhoenixDriver creates the thread pool before initializing the 
> config from the supplied properties, so when the pool is created the config 
> still holds the default value.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6050) Set properties is invalid in client

2021-01-03 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-6050:
---
Attachment: PHOENIX-6050.master.001.patch

> Set properties is invalid in client
> ---
>
> Key: PHOENIX-6050
> URL: https://issues.apache.org/jira/browse/PHOENIX-6050
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1, 5.0.0
> Environment: phoenix 4.13.1
> hbase 1.3.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Major
> Fix For: 5.1.0
>
> Attachments: PHOENIX-6050-4.13-HBase-1.3.patch, 
> PHOENIX-6050.master.001.patch
>
>
> I set the property "phoenix.query.threadPoolSize" on the client, but it has 
> no effect: the thread pool always uses the default value (128).
> The code is:
> {code:java}
> Properties properties = new Properties();
> properties.setProperty("phoenix.query.threadPoolSize", "300");
> PropertiesResolve phoenixpr = new PropertiesResolve();
> String phoenixdriver = phoenixpr.readMapByKey("com/main/SyncData.properties", "phoenix_driver");
> String phoenixjdbc = phoenixpr.readMapByKey("com/main/SyncData.properties", "phoenix_jdbc");
> Class.forName(phoenixdriver);
> return DriverManager.getConnection(phoenixjdbc, properties);
> {code}
> The exception is:
> Error: Task org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask@6e91893 
> rejected from org.apache.phoenix.job.JobManager$1@26ae880a[Running, pool size = 
> 128, active threads = 128, queued tasks = 5000, completed tasks = 36647] 
> (state=08000,code=101)
> org.apache.phoenix.exception.PhoenixIOException: Task 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask@6e91893 rejected 
> from org.apache.phoenix.job.JobManager$1@26ae880a[Running, pool size = 128, 
> active threads = 128, queued tasks = 5000, completed tasks = 36647]
> at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:120)
> at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1024)
> at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:916)
> ^Reason:^
> I find that PhoenixDriver creates the thread pool before initializing the 
> config from the supplied properties, so when the pool is created the config 
> still holds the default value.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6050) Set properties is invalid in client

2021-01-03 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-6050:
---
Fix Version/s: 5.1.0

> Set properties is invalid in client
> ---
>
> Key: PHOENIX-6050
> URL: https://issues.apache.org/jira/browse/PHOENIX-6050
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1, 5.0.0
> Environment: phoenix 4.13.1
> hbase 1.3.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Major
> Fix For: 5.1.0
>
> Attachments: PHOENIX-6050-4.13-HBase-1.3.patch
>
>
> I set the property "phoenix.query.threadPoolSize" on the client, but it has 
> no effect: the thread pool always uses the default value (128).
> The code is:
> {code:java}
> Properties properties = new Properties();
> properties.setProperty("phoenix.query.threadPoolSize", "300");
> PropertiesResolve phoenixpr = new PropertiesResolve();
> String phoenixdriver = phoenixpr.readMapByKey("com/main/SyncData.properties", "phoenix_driver");
> String phoenixjdbc = phoenixpr.readMapByKey("com/main/SyncData.properties", "phoenix_jdbc");
> Class.forName(phoenixdriver);
> return DriverManager.getConnection(phoenixjdbc, properties);
> {code}
> The exception is:
> Error: Task org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask@6e91893 
> rejected from org.apache.phoenix.job.JobManager$1@26ae880a[Running, pool size = 
> 128, active threads = 128, queued tasks = 5000, completed tasks = 36647] 
> (state=08000,code=101)
> org.apache.phoenix.exception.PhoenixIOException: Task 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask@6e91893 rejected 
> from org.apache.phoenix.job.JobManager$1@26ae880a[Running, pool size = 128, 
> active threads = 128, queued tasks = 5000, completed tasks = 36647]
> at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:120)
> at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1024)
> at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:916)
> ^Reason:^
> I find that PhoenixDriver creates the thread pool before initializing the 
> config from the supplied properties, so when the pool is created the config 
> still holds the default value.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6050) Set properties is invalid in client

2021-01-03 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-6050:
---
Affects Version/s: 5.0.0

> Set properties is invalid in client
> ---
>
> Key: PHOENIX-6050
> URL: https://issues.apache.org/jira/browse/PHOENIX-6050
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1, 5.0.0
> Environment: phoenix 4.13.1
> hbase 1.3.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Major
> Attachments: PHOENIX-6050-4.13-HBase-1.3.patch
>
>
> I set the property "phoenix.query.threadPoolSize" on the client, but it has 
> no effect: the thread pool always uses the default value (128).
> The code is:
> {code:java}
> Properties properties = new Properties();
> properties.setProperty("phoenix.query.threadPoolSize", "300");
> PropertiesResolve phoenixpr = new PropertiesResolve();
> String phoenixdriver = phoenixpr.readMapByKey("com/main/SyncData.properties", "phoenix_driver");
> String phoenixjdbc = phoenixpr.readMapByKey("com/main/SyncData.properties", "phoenix_jdbc");
> Class.forName(phoenixdriver);
> return DriverManager.getConnection(phoenixjdbc, properties);
> {code}
> The exception is:
> Error: Task org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask@6e91893 
> rejected from org.apache.phoenix.job.JobManager$1@26ae880a[Running, pool size = 
> 128, active threads = 128, queued tasks = 5000, completed tasks = 36647] 
> (state=08000,code=101)
> org.apache.phoenix.exception.PhoenixIOException: Task 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask@6e91893 rejected 
> from org.apache.phoenix.job.JobManager$1@26ae880a[Running, pool size = 128, 
> active threads = 128, queued tasks = 5000, completed tasks = 36647]
> at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:120)
> at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1024)
> at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:916)
> ^Reason:^
> I find that PhoenixDriver creates the thread pool before initializing the 
> config from the supplied properties, so when the pool is created the config 
> still holds the default value.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-6272) PHOENIX-5592 introduces Dummy.java without ASF license

2020-12-30 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang reassigned PHOENIX-6272:
--

Assignee: Xinyi Yan  (was: Chao Wang)

> PHOENIX-5592 introduces Dummy.java without ASF license
> --
>
> Key: PHOENIX-6272
> URL: https://issues.apache.org/jira/browse/PHOENIX-6272
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: master
>Reporter: Istvan Toth
>Assignee: Xinyi Yan
>Priority: Major
>
> PHOENIX-5592 introduces a Dummy.java test file.
> It contains a commented-out stub that could perhaps be used as a template, 
> but more importantly it is missing an ASF header.
> If it was committed unintentionally, it should be removed; if it is thought 
> to be useful as a template and kept, then at least the proper ASF license 
> header should be added.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6277) upsert into data error after HBASE-24850,HBASE-24754 merged

2020-12-28 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-6277:
---
Attachment: PHOENIX-6277.master.002.patch

> upsert into data error after HBASE-24850,HBASE-24754 merged
> ---
>
> Key: PHOENIX-6277
> URL: https://issues.apache.org/jira/browse/PHOENIX-6277
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.1.0
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Major
> Fix For: 5.1.0
>
> Attachments: PHOENIX-6277.master.001.patch, 
> PHOENIX-6277.master.002.patch
>
>
> When HBASE-24850 and HBASE-24754 merged a CellComparator performance 
> improvement, Phoenix still used a deprecated interface to compare 
> ByteBufferKeyValue and KeyValue, so upserting data through sqlline.py throws 
> "ByteBufferKeyValue cannot be cast to KeyValue".
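
A hedged sketch of the comparison pattern that avoids the cast (assuming HBase
2.x, where server-side cells may be ByteBufferKeyValue instances): compare
through the Cell interface via CellComparator instead of casting to KeyValue.

{code:java}
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellComparator;

public class CellCompareSketch {
    // Works for any Cell implementation (KeyValue, ByteBufferKeyValue, ...),
    // whereas "(KeyValue) cell" throws ClassCastException for off-heap cells.
    static int compareRows(Cell a, Cell b) {
        return CellComparator.getInstance().compareRows(a, b);
    }
}
{code}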



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6277) upsert into data error after HBASE-24850,HBASE-24754 merged

2020-12-28 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-6277:
---
Attachment: PHOENIX-6277.master.001.patch

> upsert into data error after HBASE-24850,HBASE-24754 merged
> ---
>
> Key: PHOENIX-6277
> URL: https://issues.apache.org/jira/browse/PHOENIX-6277
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.1.0
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Major
> Fix For: 5.1.0
>
> Attachments: PHOENIX-6277.master.001.patch
>
>
> When HBASE-24850 and HBASE-24754 merged a CellComparator performance 
> improvement, Phoenix still used a deprecated interface to compare 
> ByteBufferKeyValue and KeyValue, so upserting data through sqlline.py throws 
> "ByteBufferKeyValue cannot be cast to KeyValue".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6277) upsert into data error after HBASE-24850,HBASE-24754 merged

2020-12-22 Thread Chao Wang (Jira)
Chao Wang created PHOENIX-6277:
--

 Summary: upsert into data error after HBASE-24850,HBASE-24754 
merged
 Key: PHOENIX-6277
 URL: https://issues.apache.org/jira/browse/PHOENIX-6277
 Project: Phoenix
  Issue Type: Bug
  Components: core
Affects Versions: 5.1.0
Reporter: Chao Wang
Assignee: Chao Wang
 Fix For: 5.1.0


When HBASE-24850 and HBASE-24754 merged a CellComparator performance 
improvement, Phoenix still used a deprecated interface to compare 
ByteBufferKeyValue and KeyValue, so upserting data through sqlline.py throws 
"ByteBufferKeyValue cannot be cast to KeyValue".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6262) Bulk Load have a bug in lowercase tablename

2020-12-21 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-6262:
---
Attachment: PHOENIX-6262.master.002.patch

> Bulk Load have a bug in lowercase tablename
> ---
>
> Key: PHOENIX-6262
> URL: https://issues.apache.org/jira/browse/PHOENIX-6262
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.0.0
>Reporter: zhengjiewen
>Assignee: Chao Wang
>Priority: Major
> Fix For: 5.1.0
>
> Attachments: PHOENIX-6262.master.001.patch, 
> PHOENIX-6262.master.002.patch
>
>
> h1. Bulk load with a lowercase table name
> When I use the Phoenix bulk load command to import a CSV file into a Phoenix 
> table, I get this error:
> {code:java}
> Exception in thread "main" java.lang.IllegalArgumentException: Table 
> "test"."ods_om_om_order_test" not found
> at org.apache.phoenix.util.SchemaUtil.generateColumnInfo(SchemaUtil.java:956)
> at org.apache.phoenix.mapreduce.AbstractBulkLoadTool.buildImportColumns(AbstractBulkLoadTool.java:377)
> at org.apache.phoenix.mapreduce.AbstractBulkLoadTool.loadData(AbstractBulkLoadTool.java:211)
> at org.apache.phoenix.mapreduce.AbstractBulkLoadTool.run(AbstractBulkLoadTool.java:180)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at org.apache.phoenix.mapreduce.CsvBulkLoadTool.main(CsvBulkLoadTool.java:109)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:313)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:227)
> {code}
> My command is:
> {code:java}
> hadoop jar /opt/cloudera/parcels/CDH/lib/hbase/lib/phoenix-5.0.0-cdh6.2.0-client.jar \
>   org.apache.phoenix.mapreduce.CsvBulkLoadTool -s \"\"test\"\" -t \"\"ods_om_om_order_test\"\" \
>   -i /tmp/phoenix/ods_om_om_order_test5/data.csv
> {code}
> I found that the source code has a bug in 
> *org.apache.phoenix.jdbc.PhoenixDatabaseMetaData#getColumns*. This method 
> splices the tableName and schemaName into the SQL statement that queries 
> SYSTEM.CATALOG, but if the tableName or schemaName is lowercase they become 
> *'"test"'* and *'"ods_om_om_order_test"'*, so the query finds no rows and a 
> table-not-found exception is returned.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6262) Bulk Load have a bug in lowercase tablename

2020-12-16 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-6262:
---
Attachment: PHOENIX-6262.master.001.patch

> Bulk Load have a bug in lowercase tablename
> ---
>
> Key: PHOENIX-6262
> URL: https://issues.apache.org/jira/browse/PHOENIX-6262
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.0.0
>Reporter: zhengjiewen
>Assignee: Chao Wang
>Priority: Major
> Attachments: PHOENIX-6262.master.001.patch
>
>
> h1. Bulk load with a lowercase table name
> When I use the Phoenix bulk load command to import a CSV file into a Phoenix 
> table, I get this error:
> {code:java}
> Exception in thread "main" java.lang.IllegalArgumentException: Table 
> "test"."ods_om_om_order_test" not found
> at org.apache.phoenix.util.SchemaUtil.generateColumnInfo(SchemaUtil.java:956)
> at org.apache.phoenix.mapreduce.AbstractBulkLoadTool.buildImportColumns(AbstractBulkLoadTool.java:377)
> at org.apache.phoenix.mapreduce.AbstractBulkLoadTool.loadData(AbstractBulkLoadTool.java:211)
> at org.apache.phoenix.mapreduce.AbstractBulkLoadTool.run(AbstractBulkLoadTool.java:180)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at org.apache.phoenix.mapreduce.CsvBulkLoadTool.main(CsvBulkLoadTool.java:109)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:313)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:227)
> {code}
> My command is:
> {code:java}
> hadoop jar /opt/cloudera/parcels/CDH/lib/hbase/lib/phoenix-5.0.0-cdh6.2.0-client.jar \
>   org.apache.phoenix.mapreduce.CsvBulkLoadTool -s \"\"test\"\" -t \"\"ods_om_om_order_test\"\" \
>   -i /tmp/phoenix/ods_om_om_order_test5/data.csv
> {code}
> I found that the source code has a bug in 
> *org.apache.phoenix.jdbc.PhoenixDatabaseMetaData#getColumns*. This method 
> splices the tableName and schemaName into the SQL statement that queries 
> SYSTEM.CATALOG, but if the tableName or schemaName is lowercase they become 
> *'"test"'* and *'"ods_om_om_order_test"'*, so the query finds no rows and a 
> table-not-found exception is returned.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-6262) Bulk Load have a bug in lowercase tablename

2020-12-14 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang reassigned PHOENIX-6262:
--

Assignee: Chao Wang

> Bulk Load have a bug in lowercase tablename
> ---
>
> Key: PHOENIX-6262
> URL: https://issues.apache.org/jira/browse/PHOENIX-6262
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.0.0
>Reporter: zhengjiewen
>Assignee: Chao Wang
>Priority: Major
>
> h1. Bulk load with a lowercase table name
> When I use the Phoenix bulk load command to import a CSV file into a Phoenix 
> table, I get this error:
> {code:java}
> Exception in thread "main" java.lang.IllegalArgumentException: Table 
> "test"."ods_om_om_order_test" not found
> at org.apache.phoenix.util.SchemaUtil.generateColumnInfo(SchemaUtil.java:956)
> at org.apache.phoenix.mapreduce.AbstractBulkLoadTool.buildImportColumns(AbstractBulkLoadTool.java:377)
> at org.apache.phoenix.mapreduce.AbstractBulkLoadTool.loadData(AbstractBulkLoadTool.java:211)
> at org.apache.phoenix.mapreduce.AbstractBulkLoadTool.run(AbstractBulkLoadTool.java:180)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> at org.apache.phoenix.mapreduce.CsvBulkLoadTool.main(CsvBulkLoadTool.java:109)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:313)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:227)
> {code}
> My command is:
> {code:java}
> hadoop jar /opt/cloudera/parcels/CDH/lib/hbase/lib/phoenix-5.0.0-cdh6.2.0-client.jar \
>   org.apache.phoenix.mapreduce.CsvBulkLoadTool -s \"\"test\"\" -t \"\"ods_om_om_order_test\"\" \
>   -i /tmp/phoenix/ods_om_om_order_test5/data.csv
> {code}
> I found that the source code has a bug in 
> *org.apache.phoenix.jdbc.PhoenixDatabaseMetaData#getColumns*. This method 
> splices the tableName and schemaName into the SQL statement that queries 
> SYSTEM.CATALOG, but if the tableName or schemaName is lowercase they become 
> *'"test"'* and *'"ods_om_om_order_test5"'*, so the query finds no rows and a 
> table-not-found exception is returned.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-11-30 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: PHOENIX-5860-4.x.002.patch

> Throw exception which region is closing or splitting when delete data
> -
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1, 4.x
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Major
> Fix For: 4.16.0
>
> Attachments: PHOENIX-5860-4.x.001.patch, PHOENIX-5860-4.x.002.patch, 
> PHOENIX-5860-4.x.patch, PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Currently, deletes are executed by the UngroupedAggregateRegionObserver class 
> on the server side, which checks whether isRegionClosingOrSplitting is true; 
> if it is, it throws new IOException("Temporarily unable to write from scan 
> because region is closing or splitting").
> When a region comes online, the Phoenix coprocessor initializes 
> isRegionClosingOrSplitting to false, and before a region split the region sets 
> isRegionClosingOrSplitting to true. But if the split fails, the rollback does 
> not set isRegionClosingOrSplitting back to false, so afterwards every write 
> operation keeps throwing "Temporarily unable to write from scan because region 
> is closing or splitting".
> So we should set isRegionClosingOrSplitting back to false in preRollBackSplit 
> in the UngroupedAggregateRegionObserver class, as sketched below.
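
A sketch of that proposed reset, written against the HBase 1.x observer API that
Phoenix 4.x uses (an illustrative observer, not the real
UngroupedAggregateRegionObserver):

{code:java}
import java.io.IOException;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;

public class SplitRollbackObserver extends BaseRegionObserver {
    private volatile boolean isRegionClosingOrSplitting = false;

    @Override
    public void preSplit(ObserverContext<RegionCoprocessorEnvironment> ctx)
            throws IOException {
        isRegionClosingOrSplitting = true; // block writes while the split runs
    }

    @Override
    public void preRollBackSplit(ObserverContext<RegionCoprocessorEnvironment> ctx)
            throws IOException {
        // The split failed and is rolling back: the region stays open,
        // so writes must be accepted again.
        isRegionClosingOrSplitting = false;
    }
}
{code}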
> A simple test in which a data table split fails and then rolls back 
> successfully, but deletes keep throwing the exception:
>  # Create a data table.
>  # Bulk load data into the table.
>  # Alter the hbase-server code so that the region split throws an exception, 
> then rolls back.
>  # Use the HBase shell to split the region.
>  # Check the region server log: the region split failed and the rollback 
> succeeded.
>  # Use Phoenix sqlline.py to delete data, which will throw the exception:
>  Caused by: java.io.IOException: Temporarily unable to write from scan 
> because region is closing or splitting Caused by: java.io.IOException: 
> Temporarily unable to write from scan because region is closing or splitting 
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:516)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2881)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3082)
>  ... 5 more
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108) 
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:548)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>  at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>  at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>  at 
> org.apache.phoenix.compile.DeleteCompiler$2.execute(DeleteCompiler.java:498) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:303) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295) 
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
>  at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:200)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2253)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2249)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:893) at 
> org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2249)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2243)
>  at 
> 

[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-11-04 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: PHOENIX-5860-4.x.001.patch

> Throw exception which region is closing or splitting when delete data
> -
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1, 4.x
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Fix For: 4.x
>
> Attachments: PHOENIX-5860-4.x.001.patch, PHOENIX-5860-4.x.patch, 
> PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Currently, deletes are executed by the UngroupedAggregateRegionObserver class 
> on the server side, which checks whether isRegionClosingOrSplitting is true; 
> if it is, it throws new IOException("Temporarily unable to write from scan 
> because region is closing or splitting").
> When a region comes online, the Phoenix coprocessor initializes 
> isRegionClosingOrSplitting to false, and before a region split the region sets 
> isRegionClosingOrSplitting to true. But if the split fails, the rollback does 
> not set isRegionClosingOrSplitting back to false, so afterwards every write 
> operation keeps throwing "Temporarily unable to write from scan because region 
> is closing or splitting".
> So we should set isRegionClosingOrSplitting back to false in preRollBackSplit 
> in the UngroupedAggregateRegionObserver class.
> A simple test in which a data table split fails and then rolls back 
> successfully, but deletes keep throwing the exception:
>  # Create a data table.
>  # Bulk load data into the table.
>  # Alter the hbase-server code so that the region split throws an exception, 
> then rolls back.
>  # Use the HBase shell to split the region.
>  # Check the region server log: the region split failed and the rollback 
> succeeded.
>  # Use Phoenix sqlline.py to delete data, which will throw the exception:
>  Caused by: java.io.IOException: Temporarily unable to write from scan 
> because region is closing or splitting Caused by: java.io.IOException: 
> Temporarily unable to write from scan because region is closing or splitting 
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:516)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2881)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3082)
>  ... 5 more
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108) 
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:548)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>  at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>  at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>  at 
> org.apache.phoenix.compile.DeleteCompiler$2.execute(DeleteCompiler.java:498) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:303) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295) 
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
>  at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:200)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2253)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2249)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:893) at 
> org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2249)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2243)
>  at 
> 

[jira] [Updated] (PHOENIX-6214) client connect throw NEWER_SCHEMA_FOUND exception after phoenix.schema.isNamespaceMappingEnabled is true

2020-11-04 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-6214:
---
Description: 
After setting phoenix.schema.isNamespaceMappingEnabled to true, enter “use 
"SYSTEM"” in sqlline.py and quit sqlline. A NewerSchemaAlreadyExistsException is 
thrown when entering sqlline again. As shown below:

org.apache.phoenix.schema.NewerSchemaAlreadyExistsException: ERROR 721 (42M04): 
Schema with given name already exists schemaName=SYSTEM
 at org.apache.phoenix.schema.MetaDataClient.createSchema(MetaDataClient.java:4111)
 at org.apache.phoenix.compile.CreateSchemaCompiler$1.execute(CreateSchemaCompiler.java:46)
 at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:394)
 at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:377)
 at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)

  was:
After setting phoenix.schema.isNamespaceMappingEnabled to true, enter “use 
"SYSTEM"” in sqlline.py and quit sqlline. A NewerSchemaAlreadyExistsException is 
thrown when entering sqlline again. As shown below:

org.apache.phoenix.schema.NewerSchemaAlreadyExistsException: ERROR 721 (42M04): 
Schema with given name already exists schemaName=SYSTEM
 at org.apache.phoenix.schema.MetaDataClient.createSchema(MetaDataClient.java:4111)
 at org.apache.phoenix.compile.CreateSchemaCompiler$1.execute(CreateSchemaCompiler.java:46)
 at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:394)
 at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:377)
 at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)


> client connect throw NEWER_SCHEMA_FOUND exception after 
> phoenix.schema.isNamespaceMappingEnabled is true
> 
>
> Key: PHOENIX-6214
> URL: https://issues.apache.org/jira/browse/PHOENIX-6214
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
>
> After setting phoenix.schema.isNamespaceMappingEnabled to true, enter “use 
> "SYSTEM"” in sqlline.py and quit sqlline. A NewerSchemaAlreadyExistsException 
> is thrown when entering sqlline again. As shown below:
> org.apache.phoenix.schema.NewerSchemaAlreadyExistsException: ERROR 721 
> (42M04): Schema with given name already exists schemaName=SYSTEM
>  at org.apache.phoenix.schema.MetaDataClient.createSchema(MetaDataClient.java:4111)
>  at org.apache.phoenix.compile.CreateSchemaCompiler$1.execute(CreateSchemaCompiler.java:46)
>  at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:394)
>  at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:377)
>  at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6214) client connect throw NEWER_SCHEMA_FOUND exception after phoenix.schema.isNamespaceMappingEnabled is true

2020-11-04 Thread Chao Wang (Jira)
Chao Wang created PHOENIX-6214:
--

 Summary: client connect throw NEWER_SCHEMA_FOUND exception after 
phoenix.schema.isNamespaceMappingEnabled is true
 Key: PHOENIX-6214
 URL: https://issues.apache.org/jira/browse/PHOENIX-6214
 Project: Phoenix
  Issue Type: Bug
  Components: core
Affects Versions: 4.13.1
Reporter: Chao Wang
Assignee: Chao Wang


After setting phoenix.schema.isNamespaceMappingEnabled to true, enter “use 
"SYSTEM"” in sqlline.py and quit sqlline. A NewerSchemaAlreadyExistsException is 
thrown when entering sqlline again. As shown below:

org.apache.phoenix.schema.NewerSchemaAlreadyExistsException: ERROR 721 (42M04): 
Schema with given name already exists schemaName=SYSTEM
 at org.apache.phoenix.schema.MetaDataClient.createSchema(MetaDataClient.java:4111)
 at org.apache.phoenix.compile.CreateSchemaCompiler$1.execute(CreateSchemaCompiler.java:46)
 at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:394)
 at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:377)
 at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6209) Remove unused estimateParallelLevel()

2020-11-04 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-6209:
---
Attachment: (was: PHOENIX-6209.master.v1.patch)

> Remove unused estimateParallelLevel()
> -
>
> Key: PHOENIX-6209
> URL: https://issues.apache.org/jira/browse/PHOENIX-6209
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 5.1.0, 4.16.0
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Minor
> Fix For: 5.1.0, 4.16.0
>
> Attachments: PHOENIX-6209.master.001.patch
>
>
> There is code like "parallelLevel2 = CostUtil.estimateParallelLevel()" in 
> HashJoinPlan.java, but parallelLevel2 is never used, so we can remove it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6209) Remove unused estimateParallelLevel()

2020-11-04 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-6209:
---
Attachment: PHOENIX-6209.master.001.patch

> Remove unused estimateParallelLevel()
> -
>
> Key: PHOENIX-6209
> URL: https://issues.apache.org/jira/browse/PHOENIX-6209
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 5.1.0, 4.16.0
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Minor
> Fix For: 5.1.0, 4.16.0
>
> Attachments: PHOENIX-6209.master.001.patch, 
> PHOENIX-6209.master.v1.patch
>
>
> There is code like "parallelLevel2 = CostUtil.estimateParallelLevel()" in 
> HashJoinPlan.java, but parallelLevel2 is never used, so we can remove it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-5227) Failed to build index for unexpected reason!

2020-10-28 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang reassigned PHOENIX-5227:
--

Assignee: Chao Wang

> Failed to build index for unexpected reason!
> 
>
> Key: PHOENIX-5227
> URL: https://issues.apache.org/jira/browse/PHOENIX-5227
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0
> Environment: CDH:6.0.1
> HBASE:2.0
>Reporter: mimimiracle
>Assignee: Chao Wang
>Priority: Blocker
>
> step 1: create a table
> step 2: upsert data into the table
> step 3: create an index on the table
> step 4: upsert new data into the table; it then fails (see the JDBC sketch 
> after the stack trace below)
> {panel}
> Error: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: 
> Failed 1 action: 
> org.apache.phoenix.hbase.index.builder.IndexBuildingFailureException: Failed 
> to build index for unexpected reason!
>   at 
> org.apache.phoenix.hbase.index.util.IndexManagementUtil.rethrowIndexingException(IndexManagementUtil.java:206)
>   at 
> org.apache.phoenix.hbase.index.Indexer.preBatchMutate(Indexer.java:351)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$28.call(RegionCoprocessorHost.java:1010)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$28.call(RegionCoprocessorHost.java:1007)
>   at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:540)
>   at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:614)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preBatchMutate(RegionCoprocessorHost.java:1007)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$MutationBatchOperation.prepareMiniBatchOperations(HRegion.java:3487)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:3896)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3854)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3785)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1027)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:959)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:922)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2666)
>   at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42014)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> Caused by: java.lang.VerifyError: 
> org/apache/phoenix/hbase/index/covered/data/IndexMemStore$1
>   at 
> org.apache.phoenix.hbase.index.covered.data.IndexMemStore.(IndexMemStore.java:82)
>   at 
> org.apache.phoenix.hbase.index.covered.LocalTableState.(LocalTableState.java:57)
>   at 
> org.apache.phoenix.hbase.index.covered.NonTxIndexBuilder.getIndexUpdate(NonTxIndexBuilder.java:52)
>   at 
> org.apache.phoenix.hbase.index.builder.IndexBuildManager.getIndexUpdate(IndexBuildManager.java:90)
>   at 
> org.apache.phoenix.hbase.index.Indexer.preBatchMutateWithExceptions(Indexer.java:503)
>   at 
> org.apache.phoenix.hbase.index.Indexer.preBatchMutate(Indexer.java:348)
>   ... 18 more
> {panel}
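
A hedged JDBC sketch of the four steps above; the table name, columns, and 
connection URL are invented for illustration, since the report does not give 
a schema:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class IndexUpsertRepro {
    public static void main(String[] args) throws SQLException {
        try (Connection conn =
                     DriverManager.getConnection("jdbc:phoenix:localhost:2181");
             Statement stmt = conn.createStatement()) {
            // step 1: create a table
            stmt.execute("CREATE TABLE T (ID INTEGER PRIMARY KEY, V VARCHAR)");
            // step 2: upsert data into the table
            stmt.execute("UPSERT INTO T VALUES (1, 'a')");
            conn.commit();
            // step 3: create an index on the table
            stmt.execute("CREATE INDEX IDX_T_V ON T (V)");
            // step 4: upsert new data; per the report this fails with
            // IndexBuildingFailureException (root cause: VerifyError).
            stmt.execute("UPSERT INTO T VALUES (2, 'b')");
            conn.commit();
        }
    }
}
{code}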



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (PHOENIX-5227) Failed to build index for unexpected reason!

2020-10-28 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang resolved PHOENIX-5227.

Fix Version/s: 5.1.0
   Resolution: Fixed

> Failed to build index for unexpected reason!
> 
>
> Key: PHOENIX-5227
> URL: https://issues.apache.org/jira/browse/PHOENIX-5227
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0
> Environment: CDH:6.0.1
> HBASE:2.0
>Reporter: mimimiracle
>Assignee: Chao Wang
>Priority: Blocker
> Fix For: 5.1.0
>
>
> step 1: create a table
> step 2: upsert data into the table
> step 3: create an index on the table
> step 4: upsert new data into the table; it then fails
> {panel}
> Error: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: 
> Failed 1 action: 
> org.apache.phoenix.hbase.index.builder.IndexBuildingFailureException: Failed 
> to build index for unexpected reason!
>   at 
> org.apache.phoenix.hbase.index.util.IndexManagementUtil.rethrowIndexingException(IndexManagementUtil.java:206)
>   at 
> org.apache.phoenix.hbase.index.Indexer.preBatchMutate(Indexer.java:351)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$28.call(RegionCoprocessorHost.java:1010)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$28.call(RegionCoprocessorHost.java:1007)
>   at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:540)
>   at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:614)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preBatchMutate(RegionCoprocessorHost.java:1007)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$MutationBatchOperation.prepareMiniBatchOperations(HRegion.java:3487)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:3896)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3854)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3785)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1027)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:959)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:922)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2666)
>   at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42014)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> Caused by: java.lang.VerifyError: 
> org/apache/phoenix/hbase/index/covered/data/IndexMemStore$1
>   at 
> org.apache.phoenix.hbase.index.covered.data.IndexMemStore.(IndexMemStore.java:82)
>   at 
> org.apache.phoenix.hbase.index.covered.LocalTableState.(LocalTableState.java:57)
>   at 
> org.apache.phoenix.hbase.index.covered.NonTxIndexBuilder.getIndexUpdate(NonTxIndexBuilder.java:52)
>   at 
> org.apache.phoenix.hbase.index.builder.IndexBuildManager.getIndexUpdate(IndexBuildManager.java:90)
>   at 
> org.apache.phoenix.hbase.index.Indexer.preBatchMutateWithExceptions(Indexer.java:503)
>   at 
> org.apache.phoenix.hbase.index.Indexer.preBatchMutate(Indexer.java:348)
>   ... 18 more
> {panel}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6209) Remove unused estimateParallelLevel()

2020-10-27 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-6209:
---
Description: There is code like "parallelLevel2 = 
CostUtil.estimateParallelLevel()" in HashJoinPlan.java, but parallelLevel2 is 
never used, so we can remove it.  (was: there is code  that  like 
"parallelLevel2  = CostUtil.estimateParallelLevel()" in HashJoinPlan.java, 
but parallelLevel2   is do not use. so  we can remove parallelLevel2.)

> Remove unused estimateParallelLevel()
> -
>
> Key: PHOENIX-6209
> URL: https://issues.apache.org/jira/browse/PHOENIX-6209
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 5.1.0, 4.16.0
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Minor
> Fix For: 5.1.0, 4.16.0
>
> Attachments: PHOENIX-6209.master.v1.patch
>
>
> There is code like "parallelLevel2 = CostUtil.estimateParallelLevel()" in 
> HashJoinPlan.java, but parallelLevel2 is never used, so we can remove it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6209) Remove unused estimateParallelLevel()

2020-10-27 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-6209:
---
Attachment: PHOENIX-6209.master.v1.patch

> Remove unused estimateParallelLevel()
> -
>
> Key: PHOENIX-6209
> URL: https://issues.apache.org/jira/browse/PHOENIX-6209
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 5.1.0, 4.16.0
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Minor
> Fix For: 5.1.0, 4.16.0
>
> Attachments: PHOENIX-6209.master.v1.patch
>
>
> There is code like "parallelLevel2 = CostUtil.estimateParallelLevel()" in 
> HashJoinPlan.java, but parallelLevel2 is never used, so we can remove it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6209) Remove unused estimateParallelLevel()

2020-10-27 Thread Chao Wang (Jira)
Chao Wang created PHOENIX-6209:
--

 Summary: Remove unused estimateParallelLevel()
 Key: PHOENIX-6209
 URL: https://issues.apache.org/jira/browse/PHOENIX-6209
 Project: Phoenix
  Issue Type: Improvement
  Components: core
Affects Versions: 5.1.0, 4.16.0
Reporter: Chao Wang
Assignee: Chao Wang
 Fix For: 5.1.0, 4.16.0


There is code like "parallelLevel2 = CostUtil.estimateParallelLevel()" in 
HashJoinPlan.java, but parallelLevel2 is never used, so we can remove it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-09-28 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: (was: PHOENIX-5860-4.x.patch)

> Throw exception which region is closing or splitting when delete data
> -
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1, 4.x
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Fix For: 4.x
>
> Attachments: PHOENIX-5860-4.x.patch, 
> PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Currently, deletes are handled on the server side by the 
> UngroupedAggregateRegionObserver class, which checks whether 
> isRegionClosingOrSplitting is true. When it is true, the class throws new 
> IOException("Temporarily unable to write from scan because region is closing 
> or splitting").
> When a region comes online, the Phoenix coprocessor is initialized with 
> isRegionClosingOrSplitting set to false. Before a region split, the region 
> sets isRegionClosingOrSplitting to true. But if the split fails, it is rolled 
> back without resetting isRegionClosingOrSplitting to false, and from then on 
> every write operation keeps throwing "Temporarily unable to write from scan 
> because region is closing or splitting".
> So we should reset isRegionClosingOrSplitting to false in preRollBackSplit in 
> the UngroupedAggregateRegionObserver class (a minimal sketch of this change 
> follows the stack trace below).
> A simple test in which a data table split fails and then rolls back 
> successfully, but deletes keep throwing the exception:
>  # create a data table
>  # bulkload data into this table
>  # alter the hbase-server code so that the region split throws an exception 
> and then rolls back
>  # use the hbase shell to split a region
>  # check the regionserver log to confirm the split failed and the rollback 
> succeeded
>  # use phoenix sqlline.py to delete data, which will throw the exception:
>  Caused by: java.io.IOException: Temporarily unable to write from scan 
> because region is closing or splitting 
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:516)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2881)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3082)
>  ... 5 more
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108) 
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:548)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>  at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>  at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>  at 
> org.apache.phoenix.compile.DeleteCompiler$2.execute(DeleteCompiler.java:498) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:303) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295) 
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
>  at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:200)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2253)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2249)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:893) at 
> org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2249)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2243)
>  at 
> 
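
A minimal sketch of the proposed reset, written against the HBase 1.x 
coprocessor API that the Phoenix 4.x line runs on; the class name and flag 
handling here are simplified stand-ins, and the attached patches contain the 
actual change:

{code:java}
import java.io.IOException;

import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;

// Simplified stand-in for UngroupedAggregateRegionObserver: only the
// split-flag handling relevant to this issue is shown.
public class SplitFlagObserver extends BaseRegionObserver {

    private volatile boolean isRegionClosingOrSplitting = false;

    @Override
    public void preSplit(ObserverContext<RegionCoprocessorEnvironment> ctx)
            throws IOException {
        // A split is starting: block scan-driven writes.
        isRegionClosingOrSplitting = true;
    }

    @Override
    public void preRollBackSplit(ObserverContext<RegionCoprocessorEnvironment> ctx)
            throws IOException {
        // The fix: a failed split is rolled back and the region stays open,
        // so writes must be allowed again.
        isRegionClosingOrSplitting = false;
    }
}
{code}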

[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-09-28 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: PHOENIX-5860-4.x.patch

> Throw exception which region is closing or splitting when delete data
> -
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1, 4.x
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Fix For: 4.x
>
> Attachments: PHOENIX-5860-4.x.patch, 
> PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Currently, deletes are handled on the server side by the 
> UngroupedAggregateRegionObserver class, which checks whether 
> isRegionClosingOrSplitting is true. When it is true, the class throws new 
> IOException("Temporarily unable to write from scan because region is closing 
> or splitting").
> When a region comes online, the Phoenix coprocessor is initialized with 
> isRegionClosingOrSplitting set to false. Before a region split, the region 
> sets isRegionClosingOrSplitting to true. But if the split fails, it is rolled 
> back without resetting isRegionClosingOrSplitting to false, and from then on 
> every write operation keeps throwing "Temporarily unable to write from scan 
> because region is closing or splitting".
> So we should reset isRegionClosingOrSplitting to false in preRollBackSplit in 
> the UngroupedAggregateRegionObserver class.
> A simple test in which a data table split fails and then rolls back 
> successfully, but deletes keep throwing the exception:
>  # create a data table
>  # bulkload data into this table
>  # alter the hbase-server code so that the region split throws an exception 
> and then rolls back
>  # use the hbase shell to split a region
>  # check the regionserver log to confirm the split failed and the rollback 
> succeeded
>  # use phoenix sqlline.py to delete data, which will throw the exception:
>  Caused by: java.io.IOException: Temporarily unable to write from scan 
> because region is closing or splitting 
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:516)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2881)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3082)
>  ... 5 more
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108) 
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:548)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>  at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>  at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>  at 
> org.apache.phoenix.compile.DeleteCompiler$2.execute(DeleteCompiler.java:498) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:303) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295) 
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
>  at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:200)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2253)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2249)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:893) at 
> org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2249)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2243)
>  at 
> 

[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-09-28 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: (was: PHOENIX-5860-4.x-v2.patch)

> Throw exception which region is closing or splitting when delete data
> -
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1, 4.x
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Fix For: 4.x
>
> Attachments: PHOENIX-5860-4.x.patch, 
> PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Currently, deletes are handled on the server side by the 
> UngroupedAggregateRegionObserver class, which checks whether 
> isRegionClosingOrSplitting is true. When it is true, the class throws new 
> IOException("Temporarily unable to write from scan because region is closing 
> or splitting").
> When a region comes online, the Phoenix coprocessor is initialized with 
> isRegionClosingOrSplitting set to false. Before a region split, the region 
> sets isRegionClosingOrSplitting to true. But if the split fails, it is rolled 
> back without resetting isRegionClosingOrSplitting to false, and from then on 
> every write operation keeps throwing "Temporarily unable to write from scan 
> because region is closing or splitting".
> So we should reset isRegionClosingOrSplitting to false in preRollBackSplit in 
> the UngroupedAggregateRegionObserver class.
> A simple test in which a data table split fails and then rolls back 
> successfully, but deletes keep throwing the exception:
>  # create a data table
>  # bulkload data into this table
>  # alter the hbase-server code so that the region split throws an exception 
> and then rolls back
>  # use the hbase shell to split a region
>  # check the regionserver log to confirm the split failed and the rollback 
> succeeded
>  # use phoenix sqlline.py to delete data, which will throw the exception:
>  Caused by: java.io.IOException: Temporarily unable to write from scan 
> because region is closing or splitting 
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:516)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2881)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3082)
>  ... 5 more
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108) 
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:548)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>  at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>  at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>  at 
> org.apache.phoenix.compile.DeleteCompiler$2.execute(DeleteCompiler.java:498) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:303) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295) 
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
>  at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:200)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2253)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2249)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:893) at 
> org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2249)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2243)
>  at 
> 

[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-09-28 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: PHOENIX-5860-4.x-v2.patch

> Throw exception which region is closing or splitting when delete data
> -
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1, 4.x
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Fix For: 4.x
>
> Attachments: PHOENIX-5860-4.x-v2.patch, PHOENIX-5860-4.x.patch, 
> PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Currently, deletes are handled on the server side by the 
> UngroupedAggregateRegionObserver class, which checks whether 
> isRegionClosingOrSplitting is true. When it is true, the class throws new 
> IOException("Temporarily unable to write from scan because region is closing 
> or splitting").
> When a region comes online, the Phoenix coprocessor is initialized with 
> isRegionClosingOrSplitting set to false. Before a region split, the region 
> sets isRegionClosingOrSplitting to true. But if the split fails, it is rolled 
> back without resetting isRegionClosingOrSplitting to false, and from then on 
> every write operation keeps throwing "Temporarily unable to write from scan 
> because region is closing or splitting".
> So we should reset isRegionClosingOrSplitting to false in preRollBackSplit in 
> the UngroupedAggregateRegionObserver class.
> A simple test in which a data table split fails and then rolls back 
> successfully, but deletes keep throwing the exception:
>  # create a data table
>  # bulkload data into this table
>  # alter the hbase-server code so that the region split throws an exception 
> and then rolls back
>  # use the hbase shell to split a region
>  # check the regionserver log to confirm the split failed and the rollback 
> succeeded
>  # use phoenix sqlline.py to delete data, which will throw the exception:
>  Caused by: java.io.IOException: Temporarily unable to write from scan 
> because region is closing or splitting 
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:516)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2881)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3082)
>  ... 5 more
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108) 
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:548)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>  at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>  at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>  at 
> org.apache.phoenix.compile.DeleteCompiler$2.execute(DeleteCompiler.java:498) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:303) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295) 
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
>  at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:200)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2253)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2249)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:893) at 
> org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2249)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2243)
>  at 
> 

[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-09-09 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: PHOENIX-5860-4.x.patch

> Throw exception which region is closing or splitting when delete data
> -
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1, 4.x
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Fix For: 4.x
>
> Attachments: PHOENIX-5860-4.x.patch, 
> PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Currently, deletes are handled on the server side by the 
> UngroupedAggregateRegionObserver class, which checks whether 
> isRegionClosingOrSplitting is true. When it is true, the class throws new 
> IOException("Temporarily unable to write from scan because region is closing 
> or splitting").
> When a region comes online, the Phoenix coprocessor is initialized with 
> isRegionClosingOrSplitting set to false. Before a region split, the region 
> sets isRegionClosingOrSplitting to true. But if the split fails, it is rolled 
> back without resetting isRegionClosingOrSplitting to false, and from then on 
> every write operation keeps throwing "Temporarily unable to write from scan 
> because region is closing or splitting".
> So we should reset isRegionClosingOrSplitting to false in preRollBackSplit in 
> the UngroupedAggregateRegionObserver class.
> A simple test in which a data table split fails and then rolls back 
> successfully, but deletes keep throwing the exception:
>  # create a data table
>  # bulkload data into this table
>  # alter the hbase-server code so that the region split throws an exception 
> and then rolls back
>  # use the hbase shell to split a region
>  # check the regionserver log to confirm the split failed and the rollback 
> succeeded
>  # use phoenix sqlline.py to delete data, which will throw the exception:
>  Caused by: java.io.IOException: Temporarily unable to write from scan 
> because region is closing or splitting 
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:516)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2881)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3082)
>  ... 5 more
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108) 
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:548)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>  at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>  at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>  at 
> org.apache.phoenix.compile.DeleteCompiler$2.execute(DeleteCompiler.java:498) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:303) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295) 
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
>  at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:200)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2253)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2249)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:893) at 
> org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2249)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2243)
>  at 
> 

[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-09-09 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: (was: PHOENIX-5860.4.x.patch)

> Throw exception which region is closing or splitting when delete data
> -
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1, 4.x
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Fix For: 4.x
>
> Attachments: PHOENIX-5860-4.x.patch, 
> PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Currently, deletes are handled on the server side by the 
> UngroupedAggregateRegionObserver class, which checks whether 
> isRegionClosingOrSplitting is true. When it is true, the class throws new 
> IOException("Temporarily unable to write from scan because region is closing 
> or splitting").
> When a region comes online, the Phoenix coprocessor is initialized with 
> isRegionClosingOrSplitting set to false. Before a region split, the region 
> sets isRegionClosingOrSplitting to true. But if the split fails, it is rolled 
> back without resetting isRegionClosingOrSplitting to false, and from then on 
> every write operation keeps throwing "Temporarily unable to write from scan 
> because region is closing or splitting".
> So we should reset isRegionClosingOrSplitting to false in preRollBackSplit in 
> the UngroupedAggregateRegionObserver class.
> A simple test in which a data table split fails and then rolls back 
> successfully, but deletes keep throwing the exception:
>  # create a data table
>  # bulkload data into this table
>  # alter the hbase-server code so that the region split throws an exception 
> and then rolls back
>  # use the hbase shell to split a region
>  # check the regionserver log to confirm the split failed and the rollback 
> succeeded
>  # use phoenix sqlline.py to delete data, which will throw the exception:
>  Caused by: java.io.IOException: Temporarily unable to write from scan 
> because region is closing or splitting 
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:516)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2881)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3082)
>  ... 5 more
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108) 
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:548)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>  at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>  at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>  at 
> org.apache.phoenix.compile.DeleteCompiler$2.execute(DeleteCompiler.java:498) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:303) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295) 
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
>  at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:200)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2253)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2249)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:893) at 
> org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2249)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2243)
>  at 
> 

[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-09-05 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Affects Version/s: 4.x

> Throw exception which region is closing or splitting when delete data
> -
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1, 4.x
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Fix For: 4.x
>
> Attachments: PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch, 
> PHOENIX-5860.4.x.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Currently, deletes are handled on the server side by the 
> UngroupedAggregateRegionObserver class, which checks whether 
> isRegionClosingOrSplitting is true. When it is true, the class throws new 
> IOException("Temporarily unable to write from scan because region is closing 
> or splitting").
> When a region comes online, the Phoenix coprocessor is initialized with 
> isRegionClosingOrSplitting set to false. Before a region split, the region 
> sets isRegionClosingOrSplitting to true. But if the split fails, it is rolled 
> back without resetting isRegionClosingOrSplitting to false, and from then on 
> every write operation keeps throwing "Temporarily unable to write from scan 
> because region is closing or splitting".
> So we should reset isRegionClosingOrSplitting to false in preRollBackSplit in 
> the UngroupedAggregateRegionObserver class.
> A simple test in which a data table split fails and then rolls back 
> successfully, but deletes keep throwing the exception:
>  # create a data table
>  # bulkload data into this table
>  # alter the hbase-server code so that the region split throws an exception 
> and then rolls back
>  # use the hbase shell to split a region
>  # check the regionserver log to confirm the split failed and the rollback 
> succeeded
>  # use phoenix sqlline.py to delete data, which will throw the exception:
>  Caused by: java.io.IOException: Temporarily unable to write from scan 
> because region is closing or splitting 
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:516)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2881)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3082)
>  ... 5 more
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108) 
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:548)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>  at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>  at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>  at 
> org.apache.phoenix.compile.DeleteCompiler$2.execute(DeleteCompiler.java:498) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:303) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295) 
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
>  at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:200)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2253)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2249)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:893) at 
> org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2249)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2243)
>  at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:798)
>  at 
> 

[jira] [Updated] (PHOENIX-6112) Coupling of two classes only use logger

2020-08-28 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-6112:
---
Attachment: PHOENIX-6112.master.patch

> Coupling of two classes only use logger
> ---
>
> Key: PHOENIX-6112
> URL: https://issues.apache.org/jira/browse/PHOENIX-6112
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 4.x, master
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Minor
> Attachments: PHOENIX-6112-4.x.patch, PHOENIX-6112.master.patch, 
> image-2020-08-28-14-48-34-990.png
>
>
> PhoenixConfigurationUtil uses BaseResultIterators.logger to print logs. I 
> think this is inappropriate, as it couples the two classes. In general, a 
> class should log through its own local logger.
> !image-2020-08-28-14-48-34-990.png!
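
A minimal sketch of the decoupled pattern suggested above, assuming SLF4J as 
the logging API; the field name and example method below are illustrative, 
not taken from the actual patch:

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class PhoenixConfigurationUtil {
    // Local logger named after this class, instead of borrowing
    // BaseResultIterators' logger.
    private static final Logger LOGGER =
            LoggerFactory.getLogger(PhoenixConfigurationUtil.class);

    private PhoenixConfigurationUtil() {
    }

    public static void logInputTable(String tableName) {
        LOGGER.info("Configured input table {}", tableName);
    }
}
{code}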



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6112) Coupling of two classes only use logger

2020-08-28 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-6112:
---
Attachment: PHOENIX-6112-4.x.patch

> Coupling of two classes only use logger
> ---
>
> Key: PHOENIX-6112
> URL: https://issues.apache.org/jira/browse/PHOENIX-6112
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 4.x, master
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Minor
> Attachments: PHOENIX-6112-4.x.patch, image-2020-08-28-14-48-34-990.png
>
>
> PhoenixConfigurationUtil uses BaseResultIterators.logger to print logs. I 
> think this is inappropriate, as it couples the two classes. In general, a 
> class should log through its own local logger.
> !image-2020-08-28-14-48-34-990.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6112) Coupling of two classes only use logger

2020-08-28 Thread Chao Wang (Jira)
Chao Wang created PHOENIX-6112:
--

 Summary: Coupling of two classes only use logger
 Key: PHOENIX-6112
 URL: https://issues.apache.org/jira/browse/PHOENIX-6112
 Project: Phoenix
  Issue Type: Improvement
  Components: core
Affects Versions: 4.x, master
Reporter: Chao Wang
Assignee: Chao Wang
 Attachments: image-2020-08-28-14-48-34-990.png

PhoenixConfigurationUtil uses BaseResultIterators.logger to print logs. I 
think this is inappropriate, as it couples the two classes. In general, a 
class should log through its own local logger.

!image-2020-08-28-14-48-34-990.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-08-27 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: PHOENIX-5860.4.x.patch

> Throw exception which region is closing or splitting when delete data
> -
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Fix For: 4.x
>
> Attachments: PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch, 
> PHOENIX-5860.4.x.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently, deletes are handled on the server side by the 
> UngroupedAggregateRegionObserver class, which checks whether 
> isRegionClosingOrSplitting is true. When it is true, the class throws new 
> IOException("Temporarily unable to write from scan because region is closing 
> or splitting").
> When a region comes online, the Phoenix coprocessor is initialized with 
> isRegionClosingOrSplitting set to false. Before a region split, the region 
> sets isRegionClosingOrSplitting to true. But if the split fails, it is rolled 
> back without resetting isRegionClosingOrSplitting to false, and from then on 
> every write operation keeps throwing "Temporarily unable to write from scan 
> because region is closing or splitting".
> So we should reset isRegionClosingOrSplitting to false in preRollBackSplit in 
> the UngroupedAggregateRegionObserver class.
> A simple test in which a data table split fails and then rolls back 
> successfully, but deletes keep throwing the exception:
>  # create a data table
>  # bulkload data into this table
>  # alter the hbase-server code so that the region split throws an exception 
> and then rolls back
>  # use the hbase shell to split a region
>  # check the regionserver log to confirm the split failed and the rollback 
> succeeded
>  # use phoenix sqlline.py to delete data, which will throw the exception:
>  Caused by: java.io.IOException: Temporarily unable to write from scan 
> because region is closing or splitting 
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:516)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2881)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3082)
>  ... 5 more
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108) 
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:548)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>  at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>  at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>  at 
> org.apache.phoenix.compile.DeleteCompiler$2.execute(DeleteCompiler.java:498) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:303) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295) 
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
>  at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:200)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2253)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2249)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:893) at 
> org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2249)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2243)
>  at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:798)

[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-08-27 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: (was: PHOENIX-5860.4.x.patch)

> Throw exception which region is closing or splitting when delete data
> -
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Fix For: 4.x
>
> Attachments: PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently, deletes are handled on the server side by the 
> UngroupedAggregateRegionObserver class, which checks whether 
> isRegionClosingOrSplitting is true. When it is true, the class throws new 
> IOException("Temporarily unable to write from scan because region is closing 
> or splitting").
> When a region comes online, the Phoenix coprocessor is initialized with 
> isRegionClosingOrSplitting set to false. Before a region split, the region 
> sets isRegionClosingOrSplitting to true. But if the split fails, it is rolled 
> back without resetting isRegionClosingOrSplitting to false, and from then on 
> every write operation keeps throwing "Temporarily unable to write from scan 
> because region is closing or splitting".
> So we should reset isRegionClosingOrSplitting to false in preRollBackSplit in 
> the UngroupedAggregateRegionObserver class.
> A simple test in which a data table split fails and then rolls back 
> successfully, but deletes keep throwing the exception:
>  # create a data table
>  # bulkload data into this table
>  # alter the hbase-server code so that the region split throws an exception 
> and then rolls back
>  # use the hbase shell to split a region
>  # check the regionserver log to confirm the split failed and the rollback 
> succeeded
>  # use phoenix sqlline.py to delete data, which will throw the exception:
>  Caused by: java.io.IOException: Temporarily unable to write from scan 
> because region is closing or splitting 
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:516)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2881)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3082)
>  ... 5 more
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108) 
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:548)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>  at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>  at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>  at 
> org.apache.phoenix.compile.DeleteCompiler$2.execute(DeleteCompiler.java:498) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:303) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295) 
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
>  at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:200)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2253)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2249)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:893) at 
> org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2249)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2243)
>  at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:798)
>  at 
> 

[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-08-27 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: PHOENIX-5860.4.x.patch

> Throw exception which region is closing or splitting when delete data
> -
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Fix For: 4.x
>
> Attachments: PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch, 
> PHOENIX-5860.4.x.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently, deletes are executed on the server side by the 
> UngroupedAggregateRegionObserver class, which checks the 
> isRegionClosingOrSplitting flag. When the flag is true, it throws new 
> IOException("Temporarily unable to write from scan because region is 
> closing or splitting").
> When a region comes online, the Phoenix coprocessor is initialized with 
> isRegionClosingOrSplitting = false. Before a region split, the flag is set 
> to true. But if the split fails, the rollback does not set 
> isRegionClosingOrSplitting back to false, so from then on every write 
> operation on the region fails with "Temporarily unable to write from scan 
> because region is closing or splitting".
> So we should reset isRegionClosingOrSplitting to false when the region 
> runs preRollBackSplit in the UngroupedAggregateRegionObserver class, as 
> sketched below.
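> 
> A minimal sketch of the proposed reset, assuming the HBase 1.x 
> BaseRegionObserver API that the Phoenix 4.x coprocessors build on (the real 
> change belongs in UngroupedAggregateRegionObserver; the class name and 
> field here are illustrative):
> 
>     import java.io.IOException;
> 
>     import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
>     import org.apache.hadoop.hbase.coprocessor.ObserverContext;
>     import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
> 
>     public class SplitAwareObserver extends BaseRegionObserver {
> 
>         // Checked before writes from a scan are allowed; while true, the
>         // coprocessor throws IOException("Temporarily unable to write from
>         // scan because region is closing or splitting").
>         private volatile boolean isRegionClosingOrSplitting = false;
> 
>         @Override
>         public void preSplit(ObserverContext<RegionCoprocessorEnvironment> c)
>                 throws IOException {
>             // A split is starting: block writes from scans on this region.
>             isRegionClosingOrSplitting = true;
>         }
> 
>         @Override
>         public void preRollBackSplit(
>                 ObserverContext<RegionCoprocessorEnvironment> c)
>                 throws IOException {
>             // The split failed and is rolling back: the region stays online,
>             // so writes must be re-enabled. This reset is the missing step;
>             // without it the region rejects writes until it is reopened.
>             isRegionClosingOrSplitting = false;
>         }
>     }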

[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-08-27 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: (was: PHOENIX-5860.4.x.patch)


[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-08-27 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: PHOENIX-5860.4.x.patch


[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-08-27 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: (was: PHOENIX-5860-4.x.patch)


[jira] [Created] (PHOENIX-6108) The patch does not appear to apply with p0 to p2 in PreCommit jenkins

2020-08-26 Thread Chao Wang (Jira)
Chao Wang created PHOENIX-6108:
--

 Summary: The patch does not appear to apply with p0 to p2 in 
PreCommit jenkins
 Key: PHOENIX-6108
 URL: https://issues.apache.org/jira/browse/PHOENIX-6108
 Project: Phoenix
  Issue Type: Bug
Reporter: Chao Wang


I found an issue: if we attach multiple patch versions to a ticket, the 
patches interfere with each other, and the PreCommit Jenkins Hadoop QA run 
fails with "The patch does not appear to apply with p0 to p2" (the QA script 
tries to apply the attachment with patch strip levels p0 through p2). If we 
delete all but one attachment, the remaining patch applies cleanly to the 
4.x branch in PreCommit Jenkins.

Phoenix CI switched to a new QA Jenkins recently, so I suspect the new 
Jenkins has a problem; I did not see this issue in the old CI environment.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-08-26 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: PHOENIX-5860-4.x.patch


[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-08-26 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: (was: PHOENIX-5860.4.x.patch)


[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-08-26 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: PHOENIX-5860.4.x.patch


[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-08-26 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: (was: PHOENIX-5860-4.x-v4.patch)


[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-08-26 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: (was: PHOENIX-5860-v10-4.x.patch)


[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-08-26 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: (was: PHOENIX-5860-4.x-v11.patch)


[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-08-26 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: (was: PHOENIX-5860-4.x-v6.patch)


[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-08-26 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: (was: PHOENIX-5860-4.x.patch)


[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-08-26 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: (was: PHOENIX-5860-4.x-v7.patch)

> Throw exception which region is closing or splitting when delete data
> -
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Fix For: 4.x
>
> Attachments: PHOENIX-5860-4.x-v4.patch, 
> PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>

[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-08-26 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: (was: PHOENIX-5860-4.x-v5.patch)

> Throw exception which region is closing or splitting when delete data
> -
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Fix For: 4.x
>
> Attachments: PHOENIX-5860-4.x-v4.patch, 
> PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>

[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-08-26 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: (was: PHOENIX-5860-4.x-v8.patch)

> Throw exception which region is closing or splitting when delete data
> -
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Fix For: 4.x
>
> Attachments: PHOENIX-5860-4.x-v4.patch, 
> PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>

[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-08-26 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: (was: PHOENIX-5860-4.x-v3.patch)

> Throw exception which region is closing or splitting when delete data
> -
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Fix For: 4.x
>
> Attachments: PHOENIX-5860-4.x-v4.patch, 
> PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>

[jira] [Updated] (PHOENIX-6050) Set properties is invalid in client

2020-08-24 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-6050:
---
Attachment: PHOENIX-6050-4.13-HBase-1.3.patch

> Set properties is invalid in client
> ---
>
> Key: PHOENIX-6050
> URL: https://issues.apache.org/jira/browse/PHOENIX-6050
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1
> Environment: phoenix 4.13.1
> hbase 1.3.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Major
> Attachments: PHOENIX-6050-4.13-HBase-1.3.patch
>
>
> I set "phoenix.query.threadPoolSize" in the client connection properties,
> but the setting is ignored: the thread pool always uses the default value
> (128).
> The code is:
> Properties properties = new Properties();
> properties.setProperty("phoenix.query.threadPoolSize", "300");
> PropertiesResolve phoenixpr = new PropertiesResolve();
> String phoenixdriver =
> phoenixpr.readMapByKey("com/main/SyncData.properties", "phoenix_driver");
> String phoenixjdbc = phoenixpr.readMapByKey("com/main/SyncData.properties",
> "phoenix_jdbc");
> Class.forName(phoenixdriver);
> return DriverManager.getConnection(phoenixjdbc, properties);
> The error thrown is:
> Error: Task 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask@6e91893 rejected 
> from org.apache.phoenix.job.JobManager$1@26ae880a[Running, pool size = 128, 
> active threads = 128, queued tasks = 5000, completed tasks = 36647] 
> (state=08000,code=101)
> org.apache.phoenix.exception.PhoenixIOException: Task 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask@6e91893 rejected 
> from org.apache.phoenix.job.JobManager$1@26ae880a[Running, pool size = 128, 
> active threads = 128, queued tasks = 5000, completed tasks = 36647]
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:120)
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1024)
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:916)
> Reason:
> I find that PhoenixDriver creates the thread pool before initializing the
> configuration from the connection properties, so when the pool is created
> the config still holds the default values.
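> For reference, a self-contained sketch of the reported scenario (the JDBC
> URL is a placeholder and the PropertiesResolve indirection above is
> replaced by a hard-coded URL; with a JDBC 4 driver the Class.forName call
> is unnecessary):
>
> import java.sql.Connection;
> import java.sql.DriverManager;
> import java.util.Properties;
>
> public class ThreadPoolSizeRepro {
>     public static void main(String[] args) throws Exception {
>         Properties props = new Properties();
>         // Intended to raise the Phoenix client pool size, but per this
>         // report the pool is created before these properties are read,
>         // so the default of 128 stays in effect.
>         props.setProperty("phoenix.query.threadPoolSize", "300");
>
>         // Placeholder ZooKeeper quorum; substitute the real cluster.
>         try (Connection conn = DriverManager.getConnection(
>                 "jdbc:phoenix:zk-host:2181", props)) {
>             System.out.println("connected: " + !conn.isClosed());
>         }
>     }
> }
>
> If this analysis holds, the value would have to be visible to the driver
> before the pool is created, e.g. via hbase-site.xml on the client
> classpath, rather than only through the connection Properties.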



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-08-23 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: PHOENIX-5860-4.x-v11.patch

> Throw exception which region is closing or splitting when delete data
> -
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Fix For: 4.x
>
> Attachments: PHOENIX-5860-4.x-v11.patch, PHOENIX-5860-4.x-v3.patch, 
> PHOENIX-5860-4.x-v4.patch, PHOENIX-5860-4.x-v5.patch, 
> PHOENIX-5860-4.x-v6.patch, PHOENIX-5860-4.x-v7.patch, 
> PHOENIX-5860-4.x-v8.patch, PHOENIX-5860-4.x.patch, 
> PHOENIX-5860-v10-4.x.patch, PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>

[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-08-22 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: PHOENIX-5860-v10-4.x.patch

> Throw exception which region is closing or splitting when delete data
> -
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Fix For: 4.x
>
> Attachments: PHOENIX-5860-4.x-v3.patch, PHOENIX-5860-4.x-v4.patch, 
> PHOENIX-5860-4.x-v5.patch, PHOENIX-5860-4.x-v6.patch, 
> PHOENIX-5860-4.x-v7.patch, PHOENIX-5860-4.x-v8.patch, PHOENIX-5860-4.x.patch, 
> PHOENIX-5860-v10-4.x.patch, PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>

[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-08-22 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: PHOENIX-5860-4.x-v8.patch

> Throw exception which region is closing or splitting when delete data
> -
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Fix For: 4.x
>
> Attachments: PHOENIX-5860-4.x-v3.patch, PHOENIX-5860-4.x-v4.patch, 
> PHOENIX-5860-4.x-v5.patch, PHOENIX-5860-4.x-v6.patch, 
> PHOENIX-5860-4.x-v7.patch, PHOENIX-5860-4.x-v8.patch, PHOENIX-5860-4.x.patch, 
> PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>

[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-08-22 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: PHOENIX-5860-4.x-v7.patch

> Throw exception which region is closing or splitting when delete data
> -
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Fix For: 4.x
>
> Attachments: PHOENIX-5860-4.x-v3.patch, PHOENIX-5860-4.x-v4.patch, 
> PHOENIX-5860-4.x-v5.patch, PHOENIX-5860-4.x-v6.patch, 
> PHOENIX-5860-4.x-v7.patch, PHOENIX-5860-4.x.patch, 
> PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>

[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-08-22 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: PHOENIX-5860-4.x-v6.patch

> Throw exception which region is closing or splitting when delete data
> -
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Fix For: 4.x
>
> Attachments: PHOENIX-5860-4.x-v3.patch, PHOENIX-5860-4.x-v4.patch, 
> PHOENIX-5860-4.x-v5.patch, PHOENIX-5860-4.x-v6.patch, PHOENIX-5860-4.x.patch, 
> PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>

[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-08-22 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: PHOENIX-5860-4.x-v5.patch

> Throw exception which region is closing or splitting when delete data
> -
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Fix For: 4.x
>
> Attachments: PHOENIX-5860-4.x-v3.patch, PHOENIX-5860-4.x-v4.patch, 
> PHOENIX-5860-4.x-v5.patch, PHOENIX-5860-4.x.patch, 
> PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>

[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-08-22 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: PHOENIX-5860-4.x-v4.patch

> Throw exception which region is closing or splitting when delete data
> -
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Fix For: 4.x
>
> Attachments: PHOENIX-5860-4.x-v3.patch, PHOENIX-5860-4.x-v4.patch, 
> PHOENIX-5860-4.x.patch, PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently delete data is UngroupedAggregateRegionObserver class  on server 
> side, this class check if isRegionClosingOrSplitting is true. when 
> isRegionClosingOrSplitting is true, will throw new IOException("Temporarily 
> unable to write from scan because region is closing or splitting"). 
> when region online , which initialize phoenix CP that 
> isRegionClosingOrSplitting  is false.before region split, region change  
> isRegionClosingOrSplitting to true.but if region split failed,split will roll 
> back where not change   isRegionClosingOrSplitting  to false. after that all 
> write  opration will always throw exception which is Temporarily unable to 
> write from scan because region is closing or splitting。
> so we should change isRegionClosingOrSplitting   to false  when region 
> preRollBackSplit in UngroupedAggregateRegionObserver class。
> A simple test where a data table split failed, then roll back success.but 
> delete data always throw exception.
>  # create data table 
>  # bulkload data for this table
>  # alter hbase-server code, which region split will throw exception , then 
> rollback.
>  # use hbase shell , split region
>  # view regionserver log, where region split failed, and then rollback 
> success.
>  # user phoenix sqlline.py for delete data, which  will throw exption
>  Caused by: java.io.IOException: Temporarily unable to write from scan 
> because region is closing or splitting Caused by: java.io.IOException: 
> Temporarily unable to write from scan because region is closing or splitting 
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:516)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2881)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3082)
>  ... 5 more
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108) 
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:548)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>  at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>  at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>  at 
> org.apache.phoenix.compile.DeleteCompiler$2.execute(DeleteCompiler.java:498) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:303) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295) 
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
>  at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:200)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2253)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2249)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:893) at 
> org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2249)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2243)
>  at 
> 
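
A minimal sketch of the fix proposed above, assuming the HBase 1.x 
RegionObserver API that Phoenix 4.13 builds against; the class name and flag 
handling below are illustrative stand-ins for the real 
UngroupedAggregateRegionObserver, not the committed patch:

import java.io.IOException;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;

public class SplitAwareObserverSketch extends BaseRegionObserver {
    // Flag consulted by the server-side write path; writes from scans are
    // rejected while it is true.
    private volatile boolean isRegionClosingOrSplitting = false;

    @Override
    public void preSplit(ObserverContext<RegionCoprocessorEnvironment> c)
            throws IOException {
        // A split is starting: stop writes from scans until it completes.
        isRegionClosingOrSplitting = true;
    }

    @Override
    public void preRollBackSplit(
            ObserverContext<RegionCoprocessorEnvironment> ctx)
            throws IOException {
        // The split failed and is rolling back: the region stays online, so
        // the flag must be cleared here. Without this reset every later
        // write fails with "Temporarily unable to write from scan because
        // region is closing or splitting" until the region is reopened.
        isRegionClosingOrSplitting = false;
    }
}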

[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-08-21 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: PHOENIX-5860-4.x-v3.patch

> Throw exception which region is closing or splitting when delete data
> -
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Fix For: 4.x
>
> Attachments: PHOENIX-5860-4.x-v3.patch, PHOENIX-5860-4.x.patch, 
> PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently, deletes are executed server-side by the 
> UngroupedAggregateRegionObserver class, which checks the 
> isRegionClosingOrSplitting flag; when the flag is true it throws new 
> IOException("Temporarily unable to write from scan because region is closing 
> or splitting").
> When a region comes online, the Phoenix coprocessor is initialized with 
> isRegionClosingOrSplitting = false. Before a region splits, the flag is set 
> to true. If the split fails, however, the rollback does not reset 
> isRegionClosingOrSplitting to false, so from then on every write operation 
> fails with "Temporarily unable to write from scan because region is closing 
> or splitting".
> Therefore isRegionClosingOrSplitting should be reset to false in 
> preRollBackSplit in the UngroupedAggregateRegionObserver class.
> A simple test in which a data table split fails and the rollback succeeds, 
> yet deleting data keeps throwing the exception:
>  # create a data table
>  # bulkload data into the table
>  # patch the hbase-server code so that the region split throws an exception 
> and rolls back
>  # split the region from the hbase shell
>  # check the regionserver log: the split fails, then the rollback succeeds
>  # delete data with Phoenix sqlline.py, which throws the exception below 
> (a Java sketch of steps 4 and 6 follows this message):
>  Caused by: java.io.IOException: Temporarily unable to write from scan 
> because region is closing or splitting
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:516)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2881)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3082)
>  ... 5 more
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108) 
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:548)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>  at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>  at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>  at 
> org.apache.phoenix.compile.DeleteCompiler$2.execute(DeleteCompiler.java:498) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:303) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295) 
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
>  at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:200)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2253)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2249)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:893) at 
> org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2249)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2243)
>  at 
> 
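
Steps 4 and 6 of the reproduction above can also be driven from Java instead 
of the hbase shell and sqlline.py. A sketch, assuming an illustrative Phoenix 
table T and a ZooKeeper quorum at zk-host (both hypothetical names):

import java.sql.DriverManager;
import java.sql.Statement;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class SplitRollbackRepro {
    public static void main(String[] args) throws Exception {
        // Step 4: request the split; the patched server makes it fail and
        // roll back.
        try (org.apache.hadoop.hbase.client.Connection hbase =
                ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = hbase.getAdmin()) {
            admin.split(TableName.valueOf("T"));
        }
        // Step 6: after the rollback, this DELETE keeps failing with
        // "Temporarily unable to write from scan because region is closing
        // or splitting" until the region is reopened.
        try (java.sql.Connection conn =
                DriverManager.getConnection("jdbc:phoenix:zk-host");
             Statement stmt = conn.createStatement()) {
            stmt.executeUpdate("DELETE FROM T WHERE ID = 1");
            conn.commit();
        }
    }
}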

[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-08-20 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: (was: PHOENIX-5860-4.x-v2.patch)

> Throw exception which region is closing or splitting when delete data
> -
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Fix For: 4.x
>
> Attachments: PHOENIX-5860-4.x.patch, 
> PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently, deletes are executed server-side by the 
> UngroupedAggregateRegionObserver class, which checks the 
> isRegionClosingOrSplitting flag; when the flag is true it throws new 
> IOException("Temporarily unable to write from scan because region is closing 
> or splitting").
> When a region comes online, the Phoenix coprocessor is initialized with 
> isRegionClosingOrSplitting = false. Before a region splits, the flag is set 
> to true. If the split fails, however, the rollback does not reset 
> isRegionClosingOrSplitting to false, so from then on every write operation 
> fails with "Temporarily unable to write from scan because region is closing 
> or splitting".
> Therefore isRegionClosingOrSplitting should be reset to false in 
> preRollBackSplit in the UngroupedAggregateRegionObserver class.
> A simple test in which a data table split fails and the rollback succeeds, 
> yet deleting data keeps throwing the exception:
>  # create a data table
>  # bulkload data into the table
>  # patch the hbase-server code so that the region split throws an exception 
> and rolls back
>  # split the region from the hbase shell
>  # check the regionserver log: the split fails, then the rollback succeeds
>  # delete data with Phoenix sqlline.py, which throws:
>  Caused by: java.io.IOException: Temporarily unable to write from scan 
> because region is closing or splitting
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:516)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2881)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3082)
>  ... 5 more
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108) 
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:548)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>  at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>  at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>  at 
> org.apache.phoenix.compile.DeleteCompiler$2.execute(DeleteCompiler.java:498) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:303) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295) 
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
>  at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:200)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2253)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2249)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:893) at 
> org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2249)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2243)
>  at 
> 

[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-08-20 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: PHOENIX-5860-4.x-v2.patch

> Throw exception which region is closing or splitting when delete data
> -
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Fix For: 4.x
>
> Attachments: PHOENIX-5860-4.x-v2.patch, PHOENIX-5860-4.x.patch, 
> PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently, deletes are executed server-side by the 
> UngroupedAggregateRegionObserver class, which checks the 
> isRegionClosingOrSplitting flag; when the flag is true it throws new 
> IOException("Temporarily unable to write from scan because region is closing 
> or splitting").
> When a region comes online, the Phoenix coprocessor is initialized with 
> isRegionClosingOrSplitting = false. Before a region splits, the flag is set 
> to true. If the split fails, however, the rollback does not reset 
> isRegionClosingOrSplitting to false, so from then on every write operation 
> fails with "Temporarily unable to write from scan because region is closing 
> or splitting".
> Therefore isRegionClosingOrSplitting should be reset to false in 
> preRollBackSplit in the UngroupedAggregateRegionObserver class.
> A simple test in which a data table split fails and the rollback succeeds, 
> yet deleting data keeps throwing the exception:
>  # create a data table
>  # bulkload data into the table
>  # patch the hbase-server code so that the region split throws an exception 
> and rolls back
>  # split the region from the hbase shell
>  # check the regionserver log: the split fails, then the rollback succeeds
>  # delete data with Phoenix sqlline.py, which throws:
>  Caused by: java.io.IOException: Temporarily unable to write from scan 
> because region is closing or splitting
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:516)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2881)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3082)
>  ... 5 more
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108) 
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:548)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>  at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>  at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>  at 
> org.apache.phoenix.compile.DeleteCompiler$2.execute(DeleteCompiler.java:498) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:303) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295) 
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
>  at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:200)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2253)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2249)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:893) at 
> org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2249)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2243)
>  at 
> 

[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-08-19 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Fix Version/s: 4.x

> Throw exception which region is closing or splitting when delete data
> -
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Fix For: 4.x
>
> Attachments: PHOENIX-5860-4.x.patch, 
> PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently, deletes are executed server-side by the 
> UngroupedAggregateRegionObserver class, which checks the 
> isRegionClosingOrSplitting flag; when the flag is true it throws new 
> IOException("Temporarily unable to write from scan because region is closing 
> or splitting").
> When a region comes online, the Phoenix coprocessor is initialized with 
> isRegionClosingOrSplitting = false. Before a region splits, the flag is set 
> to true. If the split fails, however, the rollback does not reset 
> isRegionClosingOrSplitting to false, so from then on every write operation 
> fails with "Temporarily unable to write from scan because region is closing 
> or splitting".
> Therefore isRegionClosingOrSplitting should be reset to false in 
> preRollBackSplit in the UngroupedAggregateRegionObserver class.
> A simple test in which a data table split fails and the rollback succeeds, 
> yet deleting data keeps throwing the exception:
>  # create a data table
>  # bulkload data into the table
>  # patch the hbase-server code so that the region split throws an exception 
> and rolls back
>  # split the region from the hbase shell
>  # check the regionserver log: the split fails, then the rollback succeeds
>  # delete data with Phoenix sqlline.py, which throws:
>  Caused by: java.io.IOException: Temporarily unable to write from scan 
> because region is closing or splitting
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:516)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2881)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3082)
>  ... 5 more
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108) 
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:548)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>  at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>  at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>  at 
> org.apache.phoenix.compile.DeleteCompiler$2.execute(DeleteCompiler.java:498) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:303) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295) 
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
>  at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:200)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2253)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2249)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:893) at 
> org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2249)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2243)
>  at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:798)
>  at 
> 

[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-08-19 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Affects Version/s: (was: 4.x)

> Throw exception which region is closing or splitting when delete data
> -
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Attachments: PHOENIX-5860-4.x.patch, 
> PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently, deletes are executed server-side by the 
> UngroupedAggregateRegionObserver class, which checks the 
> isRegionClosingOrSplitting flag; when the flag is true it throws new 
> IOException("Temporarily unable to write from scan because region is closing 
> or splitting").
> When a region comes online, the Phoenix coprocessor is initialized with 
> isRegionClosingOrSplitting = false. Before a region splits, the flag is set 
> to true. If the split fails, however, the rollback does not reset 
> isRegionClosingOrSplitting to false, so from then on every write operation 
> fails with "Temporarily unable to write from scan because region is closing 
> or splitting".
> Therefore isRegionClosingOrSplitting should be reset to false in 
> preRollBackSplit in the UngroupedAggregateRegionObserver class.
> A simple test in which a data table split fails and the rollback succeeds, 
> yet deleting data keeps throwing the exception:
>  # create a data table
>  # bulkload data into the table
>  # patch the hbase-server code so that the region split throws an exception 
> and rolls back
>  # split the region from the hbase shell
>  # check the regionserver log: the split fails, then the rollback succeeds
>  # delete data with Phoenix sqlline.py, which throws:
>  Caused by: java.io.IOException: Temporarily unable to write from scan 
> because region is closing or splitting
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:516)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2881)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3082)
>  ... 5 more
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108) 
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:548)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>  at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>  at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>  at 
> org.apache.phoenix.compile.DeleteCompiler$2.execute(DeleteCompiler.java:498) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:303) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295) 
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
>  at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:200)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2253)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2249)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:893) at 
> org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2249)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2243)
>  at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:798)
>  at 
> 

[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-08-17 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Affects Version/s: 4.x

> Throw exception which region is closing or splitting when delete data
> -
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1, 4.x
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Attachments: PHOENIX-5860-4.x.patch, 
> PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently, deletes are executed server-side by the 
> UngroupedAggregateRegionObserver class, which checks the 
> isRegionClosingOrSplitting flag; when the flag is true it throws new 
> IOException("Temporarily unable to write from scan because region is closing 
> or splitting").
> When a region comes online, the Phoenix coprocessor is initialized with 
> isRegionClosingOrSplitting = false. Before a region splits, the flag is set 
> to true. If the split fails, however, the rollback does not reset 
> isRegionClosingOrSplitting to false, so from then on every write operation 
> fails with "Temporarily unable to write from scan because region is closing 
> or splitting".
> Therefore isRegionClosingOrSplitting should be reset to false in 
> preRollBackSplit in the UngroupedAggregateRegionObserver class.
> A simple test in which a data table split fails and the rollback succeeds, 
> yet deleting data keeps throwing the exception:
>  # create a data table
>  # bulkload data into the table
>  # patch the hbase-server code so that the region split throws an exception 
> and rolls back
>  # split the region from the hbase shell
>  # check the regionserver log: the split fails, then the rollback succeeds
>  # delete data with Phoenix sqlline.py, which throws:
>  Caused by: java.io.IOException: Temporarily unable to write from scan 
> because region is closing or splitting
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:516)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2881)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3082)
>  ... 5 more
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108) 
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:548)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>  at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>  at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>  at 
> org.apache.phoenix.compile.DeleteCompiler$2.execute(DeleteCompiler.java:498) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:303) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295) 
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
>  at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:200)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2253)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2249)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:893) at 
> org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2249)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2243)
>  at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:798)
>  at 
> 

[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-08-17 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: PHOENIX-5860-4.x.patch

> Throw exception which region is closing or splitting when delete data
> -
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Attachments: PHOENIX-5860-4.x.patch, 
> PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently, deletes are executed server-side by the 
> UngroupedAggregateRegionObserver class, which checks the 
> isRegionClosingOrSplitting flag; when the flag is true it throws new 
> IOException("Temporarily unable to write from scan because region is closing 
> or splitting").
> When a region comes online, the Phoenix coprocessor is initialized with 
> isRegionClosingOrSplitting = false. Before a region splits, the flag is set 
> to true. If the split fails, however, the rollback does not reset 
> isRegionClosingOrSplitting to false, so from then on every write operation 
> fails with "Temporarily unable to write from scan because region is closing 
> or splitting".
> Therefore isRegionClosingOrSplitting should be reset to false in 
> preRollBackSplit in the UngroupedAggregateRegionObserver class.
> A simple test in which a data table split fails and the rollback succeeds, 
> yet deleting data keeps throwing the exception:
>  # create a data table
>  # bulkload data into the table
>  # patch the hbase-server code so that the region split throws an exception 
> and rolls back
>  # split the region from the hbase shell
>  # check the regionserver log: the split fails, then the rollback succeeds
>  # delete data with Phoenix sqlline.py, which throws:
>  Caused by: java.io.IOException: Temporarily unable to write from scan 
> because region is closing or splitting
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:516)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2881)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3082)
>  ... 5 more
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108) 
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:548)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>  at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>  at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>  at 
> org.apache.phoenix.compile.DeleteCompiler$2.execute(DeleteCompiler.java:498) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:303) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295) 
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
>  at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:200)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2253)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2249)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:893) at 
> org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2249)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2243)
>  at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:798)
>  at 
> 

[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-08-17 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: (was: PHOENIX-5860-4.x.patch)

> Throw exception which region is closing or splitting when delete data
> -
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Attachments: PHOENIX-5860-4.x.patch, 
> PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently, deletes are executed server-side by the 
> UngroupedAggregateRegionObserver class, which checks the 
> isRegionClosingOrSplitting flag; when the flag is true it throws new 
> IOException("Temporarily unable to write from scan because region is closing 
> or splitting").
> When a region comes online, the Phoenix coprocessor is initialized with 
> isRegionClosingOrSplitting = false. Before a region splits, the flag is set 
> to true. If the split fails, however, the rollback does not reset 
> isRegionClosingOrSplitting to false, so from then on every write operation 
> fails with "Temporarily unable to write from scan because region is closing 
> or splitting".
> Therefore isRegionClosingOrSplitting should be reset to false in 
> preRollBackSplit in the UngroupedAggregateRegionObserver class.
> A simple test in which a data table split fails and the rollback succeeds, 
> yet deleting data keeps throwing the exception:
>  # create a data table
>  # bulkload data into the table
>  # patch the hbase-server code so that the region split throws an exception 
> and rolls back
>  # split the region from the hbase shell
>  # check the regionserver log: the split fails, then the rollback succeeds
>  # delete data with Phoenix sqlline.py, which throws:
>  Caused by: java.io.IOException: Temporarily unable to write from scan 
> because region is closing or splitting
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:516)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2881)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3082)
>  ... 5 more
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108) 
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:548)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>  at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>  at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>  at 
> org.apache.phoenix.compile.DeleteCompiler$2.execute(DeleteCompiler.java:498) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:303) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295) 
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
>  at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:200)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2253)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2249)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:893) at 
> org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2249)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2243)
>  at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:798)
>  at 
> 

[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-08-17 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: PHOENIX-5860-4.x.patch

> Throw exception which region is closing or splitting when delete data
> -
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Attachments: PHOENIX-5860-4.x.patch, 
> PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently, deletes are executed server-side by the 
> UngroupedAggregateRegionObserver class, which checks the 
> isRegionClosingOrSplitting flag; when the flag is true it throws new 
> IOException("Temporarily unable to write from scan because region is closing 
> or splitting").
> When a region comes online, the Phoenix coprocessor is initialized with 
> isRegionClosingOrSplitting = false. Before a region splits, the flag is set 
> to true. If the split fails, however, the rollback does not reset 
> isRegionClosingOrSplitting to false, so from then on every write operation 
> fails with "Temporarily unable to write from scan because region is closing 
> or splitting".
> Therefore isRegionClosingOrSplitting should be reset to false in 
> preRollBackSplit in the UngroupedAggregateRegionObserver class.
> A simple test in which a data table split fails and the rollback succeeds, 
> yet deleting data keeps throwing the exception:
>  # create a data table
>  # bulkload data into the table
>  # patch the hbase-server code so that the region split throws an exception 
> and rolls back
>  # split the region from the hbase shell
>  # check the regionserver log: the split fails, then the rollback succeeds
>  # delete data with Phoenix sqlline.py, which throws:
>  Caused by: java.io.IOException: Temporarily unable to write from scan 
> because region is closing or splitting
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:516)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2881)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3082)
>  ... 5 more
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108) 
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:548)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>  at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>  at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>  at 
> org.apache.phoenix.compile.DeleteCompiler$2.execute(DeleteCompiler.java:498) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:303) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295) 
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
>  at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:200)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2253)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2249)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:893) at 
> org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2249)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2243)
>  at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:798)
>  at 
> 

[jira] [Updated] (PHOENIX-6050) Set properties is invalid in client

2020-07-29 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-6050:
---
Description: 
I set the "phoenix.query.threadPoolSize" property on the client, but it has no 
effect: the thread pool always uses the default size (128).

The code is:

Properties properties = new Properties();
 properties.setProperty("phoenix.query.threadPoolSize","300");
 PropertiesResolve phoenixpr = new PropertiesResolve();
 String phoenixdriver = phoenixpr.readMapByKey("com/main/SyncData.properties", 
"phoenix_driver");
 String phoenixjdbc = phoenixpr.readMapByKey("com/main/SyncData.properties", 
"phoenix_jdbc");
 Class.forName(phoenixdriver);
 return DriverManager.getConnection(phoenixjdbc,properties);

The exception thrown is:

Error: Task org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask@6e91893 
rejected from org.apache.phoenix.job.JobManager$1@26ae880a[Running, pool size = 
128, active threads = 128, queued tasks = 5000, completed tasks = 36647] 
(state=08000,code=101)
org.apache.phoenix.exception.PhoenixIOException: Task 
org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask@6e91893 rejected 
from org.apache.phoenix.job.JobManager$1@26ae880a[Running, pool size = 128, 
active threads = 128, queued tasks = 5000, completed tasks = 36647]
at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:120)
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1024)
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:916)

Reason:

PhoenixDriver creates its thread pool before it initializes the configuration 
from the supplied properties, so the pool is always created with the default 
size.

  was:
I set the "phoenix.query.threadPoolSize" property on the client, but it has no 
effect: the thread pool always uses the default size (128).

The code is:

Properties properties = new Properties();
 properties.setProperty("phoenix.query.threadPoolSize","300");
 PropertiesResolve phoenixpr = new PropertiesResolve();
 String phoenixdriver = phoenixpr.readMapByKey("com/main/SyncData.properties", 
"phoenix_driver");
 String phoenixjdbc = phoenixpr.readMapByKey("com/main/SyncData.properties", 
"phoenix_jdbc");
 Class.forName(phoenixdriver);
 return DriverManager.getConnection(phoenixjdbc,properties);

The exception thrown is:

Error: Task org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask@6e91893 
rejected from org.apache.phoenix.job.JobManager$1@26ae880a[Running, pool size = 
128, active threads = 128, queued tasks = 5000, completed tasks = 36647] 
(state=08000,code=101)
org.apache.phoenix.exception.PhoenixIOException: Task 
org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask@6e91893 rejected 
from org.apache.phoenix.job.JobManager$1@26ae880a[Running, pool size = 128, 
active threads = 128, queued tasks = 5000, completed tasks = 36647]
 at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:120)
 at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1024)
 at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:916)

Reason:

PhoenixDriver creates its thread pool before it initializes the configuration 
from the supplied properties, so the pool is always created with the default 
size.


> Set properties is invalid in client
> ---
>
> Key: PHOENIX-6050
> URL: https://issues.apache.org/jira/browse/PHOENIX-6050
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1
> Environment: phoenix 4.13.1
> hbase 1.3.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Major
>
> I set the "phoenix.query.threadPoolSize" property on the client, but it has 
> no effect: the thread pool always uses the default size (128).
> The code is:
> Properties properties = new Properties();
>  properties.setProperty("phoenix.query.threadPoolSize","300");
>  PropertiesResolve phoenixpr = new PropertiesResolve();
>  String phoenixdriver = 
> phoenixpr.readMapByKey("com/main/SyncData.properties", "phoenix_driver");
>  String phoenixjdbc = phoenixpr.readMapByKey("com/main/SyncData.properties", 
> "phoenix_jdbc");
>  Class.forName(phoenixdriver);
>  return DriverManager.getConnection(phoenixjdbc,properties);
> The exception thrown is:
> Error: Task 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask@6e91893 rejected 
> from org.apache.phoenix.job.JobManager$1@26ae880a[Running, pool size = 128, 
> active threads = 128, queued tasks = 5000, completed tasks = 36647] 
> (state=08000,code=101)
> 
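
The Reason above says the pool is sized before the per-connection Properties 
are read, which matches the reporter's code path. A minimal sketch of that 
path, with the JDBC URL and pool size as illustrative values:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class ThreadPoolSizeRepro {
    public static Connection connect() throws Exception {
        Properties props = new Properties();
        // Ineffective as reported: by the time these Properties reach the
        // driver, the global query thread pool has already been created
        // with the default size (128).
        props.setProperty("phoenix.query.threadPoolSize", "300");
        return DriverManager.getConnection("jdbc:phoenix:zk-host:2181", props);
    }
}

A workaround consistent with this diagnosis (an assumption, not verified 
against this Phoenix version) is to set phoenix.query.threadPoolSize in the 
client-side hbase-site.xml, so the value is already visible when the driver 
first creates its pool.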

[jira] [Updated] (PHOENIX-6050) Set properties is invalid in client

2020-07-29 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-6050:
---
Description: 
I set the "phoenix.query.threadPoolSize" property on the client, but it has no 
effect: the thread pool always uses the default size (128).

The code is:

Properties properties = new Properties();
 properties.setProperty("phoenix.query.threadPoolSize","300");
 PropertiesResolve phoenixpr = new PropertiesResolve();
 String phoenixdriver = phoenixpr.readMapByKey("com/main/SyncData.properties", 
"phoenix_driver");
 String phoenixjdbc = phoenixpr.readMapByKey("com/main/SyncData.properties", 
"phoenix_jdbc");
 Class.forName(phoenixdriver);
 return DriverManager.getConnection(phoenixjdbc,properties);

The exception thrown is:

Error: Task org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask@6e91893 
rejected from org.apache.phoenix.job.JobManager$1@26ae880a[Running, pool size = 
128, active threads = 128, queued tasks = 5000, completed tasks = 36647] 
(state=08000,code=101)
org.apache.phoenix.exception.PhoenixIOException: Task 
org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask@6e91893 rejected 
from org.apache.phoenix.job.JobManager$1@26ae880a[Running, pool size = 128, 
active threads = 128, queued tasks = 5000, completed tasks = 36647]
 at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:120)
 at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1024)
 at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:916)

Reason:

PhoenixDriver creates its thread pool before it initializes the configuration 
from the supplied properties, so the pool is always created with the default 
size.

  was:
I set the "phoenix.query.threadPoolSize" property on the client, but it has no 
effect: the thread pool always uses the default size (128).

The code is:

Properties properties = new Properties();
 properties.setProperty("phoenix.query.threadPoolSize","300");
 PropertiesResolve phoenixpr = new PropertiesResolve();
 String phoenixdriver = phoenixpr.readMapByKey("com/main/SyncData.properties", 
"phoenix_driver");
 String phoenixjdbc = phoenixpr.readMapByKey("com/main/SyncData.properties", 
"phoenix_jdbc");
 Class.forName(phoenixdriver);
 return DriverManager.getConnection(phoenixjdbc,properties);

The exception thrown is:

Error: Task org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask@6e91893 
rejected from org.apache.phoenix.job.JobManager$1@26ae880a[Running, pool size = 
128, active threads = 128, queued tasks = 5000, completed tasks = 36647] 
(state=08000,code=101)
org.apache.phoenix.exception.PhoenixIOException: Task 
org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask@6e91893 rejected 
from org.apache.phoenix.job.JobManager$1@26ae880a[Running, pool size = 128, 
active threads = 128, queued tasks = 5000, completed tasks = 36647]

Reason:

PhoenixDriver creates its thread pool before it initializes the configuration 
from the supplied properties, so the pool is always created with the default 
size.


> Set properties is invalid in client
> ---
>
> Key: PHOENIX-6050
> URL: https://issues.apache.org/jira/browse/PHOENIX-6050
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1
> Environment: phoenix 4.13.1
> hbase 1.3.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Major
>
> I set the "phoenix.query.threadPoolSize" property on the client, but it has 
> no effect: the thread pool always uses the default size (128).
> The code is:
> Properties properties = new Properties();
>  properties.setProperty("phoenix.query.threadPoolSize","300");
>  PropertiesResolve phoenixpr = new PropertiesResolve();
>  String phoenixdriver = 
> phoenixpr.readMapByKey("com/main/SyncData.properties", "phoenix_driver");
>  String phoenixjdbc = phoenixpr.readMapByKey("com/main/SyncData.properties", 
> "phoenix_jdbc");
>  Class.forName(phoenixdriver);
>  return DriverManager.getConnection(phoenixjdbc,properties);
> The exception thrown is:
> Error: Task org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask@6e91893 
> rejected from
> 

[jira] [Updated] (PHOENIX-6050) Set properties is invalid in client

2020-07-28 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-6050:
---
Description: 
I set the "phoenix.query.threadPoolSize" property on the client, but it has no 
effect: the thread pool always uses the default size (128).

The code is:

Properties properties = new Properties();
 properties.setProperty("phoenix.query.threadPoolSize","300");
 PropertiesResolve phoenixpr = new PropertiesResolve();
 String phoenixdriver = phoenixpr.readMapByKey("com/main/SyncData.properties", 
"phoenix_driver");
 String phoenixjdbc = phoenixpr.readMapByKey("com/main/SyncData.properties", 
"phoenix_jdbc");
 Class.forName(phoenixdriver);
 return DriverManager.getConnection(phoenixjdbc,properties);

The exception thrown is:

Error: Task org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask@6e91893 
rejected from org.apache.phoenix.job.JobManager$1@26ae880a[Running, pool size = 
128, active threads = 128, queued tasks = 5000, completed tasks = 36647] 
(state=08000,code=101)
org.apache.phoenix.exception.PhoenixIOException: Task 
org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask@6e91893 rejected 
from org.apache.phoenix.job.JobManager$1@26ae880a[Running, pool size = 128, 
active threads = 128, queued tasks = 5000, completed tasks = 36647]

Reason:

PhoenixDriver creates its thread pool before it initializes the configuration 
from the supplied properties, so the pool is always created with the default 
size.

  was:
I set the "phoenix.query.threadPoolSize" property on the client, but it has no 
effect: the thread pool always uses the default size (128).

The code is:

Properties properties = new Properties();
 properties.setProperty("phoenix.query.threadPoolSize","300");
 PropertiesResolve phoenixpr = new PropertiesResolve();
 String phoenixdriver = phoenixpr.readMapByKey("com/main/SyncData.properties", 
"phoenix_driver");
 String phoenixjdbc = phoenixpr.readMapByKey("com/main/SyncData.properties", 
"phoenix_jdbc");
 Class.forName(phoenixdriver);
 return DriverManager.getConnection(phoenixjdbc,properties);

throw is:

Error: Task 
[org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask@6e91893|mailto:org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask@6e91893]
 rejected from 
[org.apache.phoenix.job.JobManager$1@26ae880a[Running|mailto:org.apache.phoenix.job.JobManager$1@26ae880a[Running],
 pool size = 128, active threads = 128, queued tasks = 5000, completed tasks = 
36647] (state=08000,code=101)
org.apache.phoenix.exception.PhoenixIOException: Task 
[org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask@6e91893|mailto:org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask@6e91893]
 rejected from 
[org.apache.phoenix.job.JobManager$1@26ae880a[Running|mailto:org.apache.phoenix.job.JobManager$1@26ae880a[Running],
 pool size = 128, active threads = 128, queued tasks = 5000, completed tasks = 
36647]

I find PhoenixDriver create threadpool before init config from properties. when 
create threadpool ,  config is always default value .


> Set properties is invalid in client
> ---
>
> Key: PHOENIX-6050
> URL: https://issues.apache.org/jira/browse/PHOENIX-6050
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1
> Environment: phoenix 4.13.1
> hbase 1.3.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Major
>
> I set properties on the client, such as "phoenix.query.threadPoolSize", but 
> they take no effect: the thread pool always uses the default value (128).
> The code is:
> Properties properties = new Properties();
>  properties.setProperty("phoenix.query.threadPoolSize","300");
>  PropertiesResolve phoenixpr = new PropertiesResolve();
>  String phoenixdriver = 
> phoenixpr.readMapByKey("com/main/SyncData.properties", "phoenix_driver");
>  String phoenixjdbc = phoenixpr.readMapByKey("com/main/SyncData.properties", 
> "phoenix_jdbc");
>  Class.forName(phoenixdriver);
>  return DriverManager.getConnection(phoenixjdbc, properties);
> The exception thrown is:
> Error: Task 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask@6e91893 
> rejected from 
> org.apache.phoenix.job.JobManager$1@26ae880a[Running, pool size = 128, 
> active threads = 128, queued tasks = 5000, completed tasks = 36647] 
> (state=08000,code=101)
>  org.apache.phoenix.exception.PhoenixIOException: 

[jira] [Updated] (PHOENIX-6050) Set properties is invalid in client

2020-07-28 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-6050:
---
Description: 
I set properties on the client, such as "phoenix.query.threadPoolSize", but 
they take no effect: the thread pool always uses the default value (128).

The code is:

Properties properties = new Properties();
 properties.setProperty("phoenix.query.threadPoolSize","300");
 PropertiesResolve phoenixpr = new PropertiesResolve();
 String phoenixdriver = phoenixpr.readMapByKey("com/main/SyncData.properties", 
"phoenix_driver");
 String phoenixjdbc = phoenixpr.readMapByKey("com/main/SyncData.properties", 
"phoenix_jdbc");
 Class.forName(phoenixdriver);
 return DriverManager.getConnection(phoenixjdbc, properties);

The exception thrown is:

Error: Task 
org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask@6e91893 rejected 
from org.apache.phoenix.job.JobManager$1@26ae880a[Running, pool size = 128, 
active threads = 128, queued tasks = 5000, completed tasks = 36647] 
(state=08000,code=101)
org.apache.phoenix.exception.PhoenixIOException: Task 
org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask@6e91893 rejected 
from org.apache.phoenix.job.JobManager$1@26ae880a[Running, pool size = 128, 
active threads = 128, queued tasks = 5000, completed tasks = 36647]

I found that PhoenixDriver creates its thread pool before it initializes the 
configuration from the supplied properties, so when the pool is created the 
config still holds the default values.

  was:
I set properties on the client, such as "phoenix.query.threadPoolSize", but 
they take no effect: the thread pool always uses the default value (128). As 
shown in the following image.

 

I found that PhoenixDriver creates its thread pool before it initializes the 
configuration from the supplied properties, so when the pool is created the 
config still holds the default values.


> Set properties is invalid in client
> ---
>
> Key: PHOENIX-6050
> URL: https://issues.apache.org/jira/browse/PHOENIX-6050
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1
> Environment: phoenix 4.13.1
> hbase 1.3.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Major
>
> I set properties on the client, such as "phoenix.query.threadPoolSize", but 
> they take no effect: the thread pool always uses the default value (128).
> The code is:
> Properties properties = new Properties();
>  properties.setProperty("phoenix.query.threadPoolSize","300");
>  PropertiesResolve phoenixpr = new PropertiesResolve();
>  String phoenixdriver = 
> phoenixpr.readMapByKey("com/main/SyncData.properties", "phoenix_driver");
>  String phoenixjdbc = phoenixpr.readMapByKey("com/main/SyncData.properties", 
> "phoenix_jdbc");
>  Class.forName(phoenixdriver);
>  return DriverManager.getConnection(phoenixjdbc, properties);
> The exception thrown is:
> Error: Task 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask@6e91893 
> rejected from 
> org.apache.phoenix.job.JobManager$1@26ae880a[Running, pool size = 128, 
> active threads = 128, queued tasks = 5000, completed tasks = 36647] 
> (state=08000,code=101)
> org.apache.phoenix.exception.PhoenixIOException: Task 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask@6e91893 
> rejected from 
> org.apache.phoenix.job.JobManager$1@26ae880a[Running, pool size = 128, 
> active threads = 128, queued tasks = 5000, completed tasks = 36647]
> I found that PhoenixDriver creates its thread pool before it initializes 
> the configuration from the supplied properties, so when the pool is created 
> the config still holds the default values.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6050) Set properties is invalid in client

2020-07-28 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-6050:
---
Description: 
I set properties on the client, such as "phoenix.query.threadPoolSize", but 
they take no effect: the thread pool always uses the default value (128). As 
shown in the following image.

 

I found that PhoenixDriver creates its thread pool before it initializes the 
configuration from the supplied properties, so when the pool is created the 
config still holds the default values.

  was:
I set properties on the client, such as "phoenix.query.threadPoolSize", but 
they take no effect: the thread pool always uses the default value (128). As 
shown in the following image.

!1.png!

I found that PhoenixDriver creates its thread pool before it initializes the 
configuration from the supplied properties, so when the pool is created the 
config still holds the default values.


> Set properties is invalid in client
> ---
>
> Key: PHOENIX-6050
> URL: https://issues.apache.org/jira/browse/PHOENIX-6050
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1
> Environment: phoenix 4.13.1
> hbase 1.3.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Major
>
> I set properties on the client, such as "phoenix.query.threadPoolSize", but 
> they take no effect: the thread pool always uses the default value (128). 
> As shown in the following image.
>  
> I found that PhoenixDriver creates its thread pool before it initializes 
> the configuration from the supplied properties, so when the pool is created 
> the config still holds the default values.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6050) Set properties is invalid in client

2020-07-28 Thread Chao Wang (Jira)
Chao Wang created PHOENIX-6050:
--

 Summary: Set properties is invalid in client
 Key: PHOENIX-6050
 URL: https://issues.apache.org/jira/browse/PHOENIX-6050
 Project: Phoenix
  Issue Type: Bug
  Components: core
Affects Versions: 4.13.1
 Environment: phoenix 4.13.1

hbase 1.3.1
Reporter: Chao Wang
Assignee: Chao Wang


I set properties on the client, such as "phoenix.query.threadPoolSize", but 
they take no effect: the thread pool always uses the default value (128). As 
shown in the following image.

!1.png!

I found that PhoenixDriver creates its thread pool before it initializes the 
configuration from the supplied properties, so when the pool is created the 
config still holds the default values.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (PHOENIX-6011) ServerCacheClient throw NullPointerException

2020-07-21 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang resolved PHOENIX-6011.

Resolution: Fixed

> ServerCacheClient throw NullPointerException
> 
>
> Key: PHOENIX-6011
> URL: https://issues.apache.org/jira/browse/PHOENIX-6011
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Major
> Fix For: 4.x, master
>
> Attachments: 1.png, PHOENIX-6011-v1.patch, PHOENIX-6011-v2.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> A NullPointerException is thrown in my production environment now, as shown 
> in the following image. 
> !1.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-07-21 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Affects Version/s: (was: 4.14.3)
   (was: 4.14.2)
   (was: 4.14.1)
   (was: 4.15.0)

> Throw exception which region is closing or splitting when delete data
> -
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Attachments: PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently, deletes are handled on the server side by the 
> UngroupedAggregateRegionObserver class, which checks whether 
> isRegionClosingOrSplitting is true. When it is true, the observer throws 
> new IOException("Temporarily unable to write from scan because region is 
> closing or splitting"). 
> When a region comes online, the Phoenix coprocessor is initialized with 
> isRegionClosingOrSplitting = false. Before a region split, the flag is 
> changed to true. But if the split fails, the split is rolled back without 
> setting isRegionClosingOrSplitting back to false. After that, every write 
> operation keeps throwing "Temporarily unable to write from scan because 
> region is closing or splitting".
> So we should reset isRegionClosingOrSplitting to false when the region runs 
> preRollBackSplit in the UngroupedAggregateRegionObserver class.
> A simple test where a data table split failed and then rolled back 
> successfully, but deleting data always throws the exception:
>  # create a data table 
>  # bulkload data into this table
>  # alter the hbase-server code so that the region split throws an 
> exception and is then rolled back
>  # use the hbase shell to split the region
>  # check the regionserver log: the region split failed and the rollback 
> succeeded
>  # use phoenix sqlline.py to delete data, which will throw the exception
>  Caused by: java.io.IOException: Temporarily unable to write from scan 
> because region is closing or splitting at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:516)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2881)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3082)
>  ... 5 more
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108) 
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:548)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>  at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>  at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>  at 
> org.apache.phoenix.compile.DeleteCompiler$2.execute(DeleteCompiler.java:498) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:303) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295) 
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
>  at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:200)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2253)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2249)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:893) at 
> org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2249)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2243)
>  at 
> 

[jira] [Reopened] (PHOENIX-6011) ServerCacheClient throw NullPointerException

2020-07-19 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang reopened PHOENIX-6011:


> ServerCacheClient throw NullPointerException
> 
>
> Key: PHOENIX-6011
> URL: https://issues.apache.org/jira/browse/PHOENIX-6011
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Major
> Fix For: 4.x, master
>
> Attachments: 1.png, PHOENIX-6011-v1.patch, PHOENIX-6011-v2.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> A NullPointerException is thrown in my production environment now, as shown 
> in the following image. 
> !1.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6011) ServerCacheClient throw NullPointerException

2020-07-15 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-6011:
---
Attachment: PHOENIX-6011-v2.patch

> ServerCacheClient throw NullPointerException
> 
>
> Key: PHOENIX-6011
> URL: https://issues.apache.org/jira/browse/PHOENIX-6011
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Major
> Attachments: 1.png, PHOENIX-6011-v1.patch, PHOENIX-6011-v2.patch
>
>
> A NullPointerException is thrown in my production environment now, as shown 
> in the following image. 
> !1.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6011) ServerCacheClient throw NullPointerException

2020-07-15 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-6011:
---
Attachment: PHOENIX-6011-v1.patch

> ServerCacheClient throw NullPointerException
> 
>
> Key: PHOENIX-6011
> URL: https://issues.apache.org/jira/browse/PHOENIX-6011
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Major
> Attachments: 1.png, PHOENIX-6011-v1.patch
>
>
> A NullPointerException is thrown in my production environment now, as shown 
> in the following image. 
> !1.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6011) ServerCacheClient throw NullPointerException

2020-07-15 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-6011:
---
Affects Version/s: 4.13.1
   5.0.0

> ServerCacheClient throw NullPointerException
> 
>
> Key: PHOENIX-6011
> URL: https://issues.apache.org/jira/browse/PHOENIX-6011
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.1, 5.0.0
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Major
> Attachments: 1.png
>
>
> A NullPointerException is thrown in my production environment now, as shown 
> in the following image. 
> !1.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6011) ServerCacheClient throw NullPointerException

2020-07-15 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-6011:
---
Affects Version/s: (was: 5.0.0)

> ServerCacheClient throw NullPointerException
> 
>
> Key: PHOENIX-6011
> URL: https://issues.apache.org/jira/browse/PHOENIX-6011
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Major
> Attachments: 1.png
>
>
> A NullPointerException is thrown in my production environment now, as shown 
> in the following image. 
> !1.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6011) ServerCacheClient throw NullPointerException

2020-07-15 Thread Chao Wang (Jira)
Chao Wang created PHOENIX-6011:
--

 Summary: ServerCacheClient throw NullPointerException
 Key: PHOENIX-6011
 URL: https://issues.apache.org/jira/browse/PHOENIX-6011
 Project: Phoenix
  Issue Type: Bug
Reporter: Chao Wang
Assignee: Chao Wang
 Attachments: 1.png

A NullPointerException is thrown in my production environment now, as shown 
in the following image. 

!1.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-07-07 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Description: 
Currently, deletes are handled on the server side by the 
UngroupedAggregateRegionObserver class, which checks whether 
isRegionClosingOrSplitting is true. When it is true, the observer throws new 
IOException("Temporarily unable to write from scan because region is closing 
or splitting"). 

When a region comes online, the Phoenix coprocessor is initialized with 
isRegionClosingOrSplitting = false. Before a region split, the flag is 
changed to true. But if the split fails, the split is rolled back without 
setting isRegionClosingOrSplitting back to false. After that, every write 
operation keeps throwing "Temporarily unable to write from scan because 
region is closing or splitting".

So we should reset isRegionClosingOrSplitting to false when the region runs 
preRollBackSplit in the UngroupedAggregateRegionObserver class, as in the 
sketch below.
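
A minimal sketch of the proposed change (assuming isRegionClosingOrSplitting 
is a volatile boolean field of the observer; the hook signature follows the 
HBase 1.x RegionObserver API):

import java.io.IOException;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;

// Inside UngroupedAggregateRegionObserver: when a failed split is rolled
// back the region stays online, so writes must be allowed again.
@Override
public void preRollBackSplit(
        ObserverContext<RegionCoprocessorEnvironment> ctx) throws IOException {
    isRegionClosingOrSplitting = false;
}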

A simple test where a data table split failed and then rolled back 
successfully, but deleting data always throws the exception:
 # create a data table 
 # bulkload data into this table
 # alter the hbase-server code so that the region split throws an exception 
and is then rolled back
 # use the hbase shell to split the region
 # check the regionserver log: the region split failed and the rollback 
succeeded
 # use phoenix sqlline.py to delete data, which will throw the exception

 Caused by: java.io.IOException: Temporarily unable to write from scan 
because region is closing or splitting at 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:516)
 at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
 at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
 at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2881)
 at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3082)
 ... 5 more
at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108) 
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:548)
 at 
org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
 at 
org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
 at 
org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
 at 
org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
 at 
org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
 at 
org.apache.phoenix.compile.DeleteCompiler$2.execute(DeleteCompiler.java:498) at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:303) at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295) at 
org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
 at 
org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:200)
 at 
com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2253)
 at 
com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2249)
 at scala.collection.Iterator$class.foreach(Iterator.scala:893) at 
org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28) 
at 
com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2249)
 at 
com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2243)
 at 
org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:798)
 at 
org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:798)
 at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) at 
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324) at 
org.apache.spark.rdd.RDD.iterator(RDD.scala:288) at 
org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) at 
org.apache.spark.scheduler.Task.run(Task.scala:99) at 
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325) at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)

 

  was:
Currently, deletes are handled on the server side by the 
UngroupedAggregateRegionObserver class, which 

[jira] [Updated] (PHOENIX-5861) Delete index data failed,due to pool closed

2020-07-07 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5861:
---
Description: 
When deleting index data, a "pool closed" exception is thrown in the 
TrackingParallelWriterIndexCommitter class. The client issues a SQL statement 
(delete from ...); when an index table is enabled, the Indexer coprocessor 
handles the index data on the server side, and the server finally uses the 
HTable of the index table to batch the mutations. 

When a region splits, the region is closed first, which also closes the 
Phoenix coprocessor (Indexer) by calling its stop method. This method stops 
the IndexWriter, IndexBuildManager, and recoveryWriter. But if the region 
split fails and starts to roll back, the rollback does not re-initialize the 
IndexWriter, IndexBuildManager, and recoveryWriter; afterwards any handling 
of index data fails with "pool closed". One possible fix is sketched below.
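
One possible fix, by analogy with the PHOENIX-5860 change, would be to 
rebuild the writers when a failed split is rolled back. This is only a sketch 
under assumptions: the IndexWriter constructor used here mirrors the one 
called from Indexer.start() in the 4.x code line, and it is not the committed 
patch.

import java.io.IOException;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.phoenix.hbase.index.write.IndexWriter;

// Inside Indexer: stop() already ran while the region was closing for the
// split, so the writer pools are shut down. When the split rolls back the
// region stays online, so rebuild the writer before new mutations arrive
// (IndexBuildManager and recoveryWriter would need the same treatment).
@Override
public void preRollBackSplit(
        ObserverContext<RegionCoprocessorEnvironment> ctx) throws IOException {
    RegionCoprocessorEnvironment env = ctx.getEnvironment();
    String serverName =
        env.getRegionServerServices().getServerName().getServerName();
    this.writer = new IndexWriter(env, serverName + "-index-writer"); // assumed ctor, as in start()
}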

A simple test where the region split failed and the rollback succeeded, but 
deleting index data still failed:

1. Create a data table and an index table.

2. Bulkload data into the table.

3. Alter the hbase-server code so that the region split throws an exception 
after the region close happens. 

4. Use the hbase shell to split the region.

5. Check the regionserver log: the region split failed and the rollback 
succeeded.

6. Use phoenix sqlline.py to delete data, which will throw the exception:

 

org.apache.phoenix.hbase.index.exception.SingleIndexWriteFailureException: 
Pool closed, not attempting to write to the index! at 
org.apache.phoenix.hbase.index.write.TrackingParallelWriterIndexCommitter$1.throwFailureIfDone(TrackingParallelWriterIndexCommitter.java:196)
 at 
org.apache.phoenix.hbase.index.write.TrackingParallelWriterIndexCommitter$1.call(TrackingParallelWriterIndexCommitter.java:155)
 at 
org.apache.phoenix.hbase.index.write.TrackingParallelWriterIndexCommitter$1.call(TrackingParallelWriterIndexCommitter.java:144)
 at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)

 

  was:
When deleting index data, a "pool closed" exception is thrown in the 
TrackingParallelWriterIndexCommitter class. The client issues a SQL statement 
(delete from ...); when an index table is enabled, the Indexer coprocessor 
handles the index data on the server side, and the server finally uses the 
HTable of the index table to batch the mutations. 

When a region splits, the region is closed first, which also closes the 
Phoenix coprocessor (Indexer) by calling its stop method. This method stops 
the IndexWriter, IndexBuildManager, and recoveryWriter. But if the region 
split fails and starts to roll back, the rollback does not re-initialize the 
IndexWriter, IndexBuildManager, and recoveryWriter; afterwards any handling 
of index data fails with "pool closed".

A simple test where the region split failed and the rollback succeeded, but 
deleting index data still failed:

1. Create a data table and an index table.

2. Bulkload data into the table.

3. Alter the hbase-server code so that the region split throws an exception 
after the region close happens. 

4. Use the hbase shell to split the region.

5. Check the regionserver log: the region split failed and the rollback 
succeeded.

6. Use phoenix sqlline.py to delete data, which will throw the exception.

 


> Delete index data failed,due to pool closed
> ---
>
> Key: PHOENIX-5861
> URL: https://issues.apache.org/jira/browse/PHOENIX-5861
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1, 4.15.0, 4.14.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Critical
> Attachments: PHOENIX-5861.4.13.x-HBASE.1.3.x.002.patch
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> When deleting index data, a "pool closed" exception is thrown in the 
> TrackingParallelWriterIndexCommitter class. The client issues a SQL 
> statement (delete from ...); when an index table is enabled, the Indexer 
> coprocessor handles the index data on the server side, and the server 
> finally uses the HTable of the index table to batch the mutations. 
> When a region splits, the region is closed first, which also closes the 
> Phoenix coprocessor (Indexer) by calling its stop method. This method stops 
> the IndexWriter, IndexBuildManager, and recoveryWriter. But if the region 
> split fails and starts to roll back, the rollback does not re-initialize 
> the IndexWriter, IndexBuildManager, and recoveryWriter; afterwards any 
> handling of index data fails with "pool closed".
> A simple test where the region split failed and the rollback succeeded, but 
> deleting index data still failed:
> 1. Create a data table and an index table.
> 2. Bulkload data into the table.
> 3. Alter the hbase-server code so that the region split throws an exception 
> after the region close happens. 
> 4. Use the hbase shell to split 

[jira] [Updated] (PHOENIX-5861) Delete index data failed,due to pool closed

2020-07-06 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5861:
---
Attachment: (was: PHOENIX-5861.4.13.x-HBASE.1.3.x.003.patch)

> Delete index data failed,due to pool closed
> ---
>
> Key: PHOENIX-5861
> URL: https://issues.apache.org/jira/browse/PHOENIX-5861
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1, 4.15.0, 4.14.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Critical
> Attachments: PHOENIX-5861.4.13.x-HBASE.1.3.x.002.patch
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> When deleting index data, a "pool closed" exception is thrown in the 
> TrackingParallelWriterIndexCommitter class. The client issues a SQL 
> statement (delete from ...); when an index table is enabled, the Indexer 
> coprocessor handles the index data on the server side, and the server 
> finally uses the HTable of the index table to batch the mutations. 
> When a region splits, the region is closed first, which also closes the 
> Phoenix coprocessor (Indexer) by calling its stop method. This method stops 
> the IndexWriter, IndexBuildManager, and recoveryWriter. But if the region 
> split fails and starts to roll back, the rollback does not re-initialize 
> the IndexWriter, IndexBuildManager, and recoveryWriter; afterwards any 
> handling of index data fails with "pool closed".
> A simple test where the region split failed and the rollback succeeded, but 
> deleting index data still failed:
> 1. Create a data table and an index table.
> 2. Bulkload data into the table.
> 3. Alter the hbase-server code so that the region split throws an exception 
> after the region close happens. 
> 4. Use the hbase shell to split the region.
> 5. Check the regionserver log: the region split failed and the rollback 
> succeeded.
> 6. Use phoenix sqlline.py to delete data, which will throw the exception.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5861) Delete index data failed,due to pool closed

2020-07-06 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5861:
---
Attachment: PHOENIX-5861.4.13.x-HBASE.1.3.x.003.patch

> Delete index data failed,due to pool closed
> ---
>
> Key: PHOENIX-5861
> URL: https://issues.apache.org/jira/browse/PHOENIX-5861
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1, 4.15.0, 4.14.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Critical
> Attachments: PHOENIX-5861.4.13.x-HBASE.1.3.x.002.patch, 
> PHOENIX-5861.4.13.x-HBASE.1.3.x.003.patch
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> When deleting index data, a "pool closed" exception is thrown in the 
> TrackingParallelWriterIndexCommitter class. The client issues a SQL 
> statement (delete from ...); when an index table is enabled, the Indexer 
> coprocessor handles the index data on the server side, and the server 
> finally uses the HTable of the index table to batch the mutations. 
> When a region splits, the region is closed first, which also closes the 
> Phoenix coprocessor (Indexer) by calling its stop method. This method stops 
> the IndexWriter, IndexBuildManager, and recoveryWriter. But if the region 
> split fails and starts to roll back, the rollback does not re-initialize 
> the IndexWriter, IndexBuildManager, and recoveryWriter; afterwards any 
> handling of index data fails with "pool closed".
> A simple test where the region split failed and the rollback succeeded, but 
> deleting index data still failed:
> 1. Create a data table and an index table.
> 2. Bulkload data into the table.
> 3. Alter the hbase-server code so that the region split throws an exception 
> after the region close happens. 
> 4. Use the hbase shell to split the region.
> 5. Check the regionserver log: the region split failed and the rollback 
> succeeded.
> 6. Use phoenix sqlline.py to delete data, which will throw the exception.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5861) Delete index data failed,due to pool closed

2020-06-22 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5861:
---
Priority: Critical  (was: Trivial)

> Delete index data failed,due to pool closed
> ---
>
> Key: PHOENIX-5861
> URL: https://issues.apache.org/jira/browse/PHOENIX-5861
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1, 4.15.0, 4.14.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Critical
> Attachments: PHOENIX-5861.4.13.x-HBASE.1.3.x.002.patch
>
>
> When deleting index data, a "pool closed" exception is thrown in the 
> TrackingParallelWriterIndexCommitter class. The client issues a SQL 
> statement (delete from ...); when an index table is enabled, the Indexer 
> coprocessor handles the index data on the server side, and the server 
> finally uses the HTable of the index table to batch the mutations. 
> When a region splits, the region is closed first, which also closes the 
> Phoenix coprocessor (Indexer) by calling its stop method. This method stops 
> the IndexWriter, IndexBuildManager, and recoveryWriter. But if the region 
> split fails and starts to roll back, the rollback does not re-initialize 
> the IndexWriter, IndexBuildManager, and recoveryWriter; afterwards any 
> handling of index data fails with "pool closed".
> A simple test where the region split failed and the rollback succeeded, but 
> deleting index data still failed:
> 1. Create a data table and an index table.
> 2. Bulkload data into the table.
> 3. Alter the hbase-server code so that the region split throws an exception 
> after the region close happens. 
> 4. Use the hbase shell to split the region.
> 5. Check the regionserver log: the region split failed and the rollback 
> succeeded.
> 6. Use phoenix sqlline.py to delete data, which will throw the exception.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5861) Delete index data failed,due to pool closed

2020-06-22 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5861:
---
Priority: Trivial  (was: Critical)

> Delete index data failed,due to pool closed
> ---
>
> Key: PHOENIX-5861
> URL: https://issues.apache.org/jira/browse/PHOENIX-5861
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1, 4.15.0, 4.14.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Trivial
> Attachments: PHOENIX-5861.4.13.x-HBASE.1.3.x.002.patch
>
>
> When deleting index data, a "pool closed" exception is thrown in the 
> TrackingParallelWriterIndexCommitter class. The client issues a SQL 
> statement (delete from ...); when an index table is enabled, the Indexer 
> coprocessor handles the index data on the server side, and the server 
> finally uses the HTable of the index table to batch the mutations. 
> When a region splits, the region is closed first, which also closes the 
> Phoenix coprocessor (Indexer) by calling its stop method. This method stops 
> the IndexWriter, IndexBuildManager, and recoveryWriter. But if the region 
> split fails and starts to roll back, the rollback does not re-initialize 
> the IndexWriter, IndexBuildManager, and recoveryWriter; afterwards any 
> handling of index data fails with "pool closed".
> A simple test where the region split failed and the rollback succeeded, but 
> deleting index data still failed:
> 1. Create a data table and an index table.
> 2. Bulkload data into the table.
> 3. Alter the hbase-server code so that the region split throws an exception 
> after the region close happens. 
> 4. Use the hbase shell to split the region.
> 5. Check the regionserver log: the region split failed and the rollback 
> succeeded.
> 6. Use phoenix sqlline.py to delete data, which will throw the exception.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-5971) Triple performance reduction of delete operation from 4.4.0 Upgrade to 4.13.1

2020-06-21 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang reassigned PHOENIX-5971:
--

Assignee: Chao Wang

> Triple performance reduction of delete operation from 4.4.0 Upgrade to 4.13.1
> -
>
> Key: PHOENIX-5971
> URL: https://issues.apache.org/jira/browse/PHOENIX-5971
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1
> Environment: 200+ node
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Major
>
> After upgrading Phoenix from 4.4.0 to 4.13.1 in my environment, delete 
> operations became roughly three times slower; for example, a single SQL 
> statement (delete ... from ...) deletes millions of rows, with hundreds of 
> such statements running concurrently. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5971) Triple performance reduction of delete operation from 4.4.0 Upgrade to 4.13.1

2020-06-21 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5971:
---
Environment: 200+ node

> Triple performance reduction of delete operation from 4.4.0 Upgrade to 4.13.1
> -
>
> Key: PHOENIX-5971
> URL: https://issues.apache.org/jira/browse/PHOENIX-5971
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1
> Environment: 200+ node
>Reporter: Chao Wang
>Priority: Major
>
> After upgrading Phoenix from 4.4.0 to 4.13.1 in my environment, delete 
> operations became roughly three times slower; for example, a single SQL 
> statement (delete ... from ...) deletes millions of rows, with hundreds of 
> such statements running concurrently. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-5971) Triple performance reduction of delete operation from 4.4.0 Upgrade to 4.13.1

2020-06-21 Thread Chao Wang (Jira)
Chao Wang created PHOENIX-5971:
--

 Summary: Triple performance reduction of delete operation from 
4.4.0 Upgrade to 4.13.1
 Key: PHOENIX-5971
 URL: https://issues.apache.org/jira/browse/PHOENIX-5971
 Project: Phoenix
  Issue Type: Bug
  Components: core
Affects Versions: 4.13.1
Reporter: Chao Wang


After upgrading Phoenix from 4.4.0 to 4.13.1 in my environment, delete 
operations became roughly three times slower; for example, a single SQL 
statement (delete ... from ...) deletes millions of rows, with hundreds of 
such statements running concurrently. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

