[jira] [Updated] (PHOENIX-5860) Throw exception "region is closing or splitting" when deleting data

2020-08-21 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: PHOENIX-5860-4.x-v5.patch

> Throw exception "region is closing or splitting" when deleting data
> --------------------------------------------------------------------
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Fix For: 4.x
>
> Attachments: PHOENIX-5860-4.x-v3.patch, PHOENIX-5860-4.x-v4.patch, 
> PHOENIX-5860-4.x-v5.patch, PHOENIX-5860-4.x.patch, 
> PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Deletes are currently handled on the server side by the 
> UngroupedAggregateRegionObserver class, which checks the 
> isRegionClosingOrSplitting flag. When the flag is true, it throws new 
> IOException("Temporarily unable to write from scan because region is 
> closing or splitting").
> When a region comes online, the Phoenix coprocessor is initialized with 
> isRegionClosingOrSplitting set to false. Before a region splits, the flag 
> is set to true. But if the split fails, the rollback does not reset the 
> flag to false, so afterwards every write operation keeps failing with 
> "Temporarily unable to write from scan because region is closing or 
> splitting".
> We should therefore reset isRegionClosingOrSplitting to false in the 
> preRollBackSplit hook of UngroupedAggregateRegionObserver, as sketched 
> below.
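> A minimal sketch of that fix (assuming the HBase 1.x RegionObserver API; 
> the class and flag below are simplified stand-ins for the real 
> UngroupedAggregateRegionObserver, not the actual patch):
> {code:java}
> import java.io.IOException;
> 
> import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
> import org.apache.hadoop.hbase.coprocessor.ObserverContext;
> import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
> 
> // Simplified stand-in for UngroupedAggregateRegionObserver.
> public class SplitAwareObserver extends BaseRegionObserver {
> 
>     // Guards server-side writes that are driven from scans.
>     private volatile boolean isRegionClosingOrSplitting = false;
> 
>     @Override
>     public void preSplit(ObserverContext<RegionCoprocessorEnvironment> ctx)
>             throws IOException {
>         // A split is starting: block new scan-driven writes on this region.
>         isRegionClosingOrSplitting = true;
>     }
> 
>     @Override
>     public void preRollBackSplit(ObserverContext<RegionCoprocessorEnvironment> ctx)
>             throws IOException {
>         // The split failed and is being rolled back; the parent region stays
>         // online, so writes must be allowed again. Without this reset, every
>         // later write keeps failing with the IOException below.
>         isRegionClosingOrSplitting = false;
>     }
> 
>     private void checkRegionState() throws IOException {
>         if (isRegionClosingOrSplitting) {
>             throw new IOException(
>                 "Temporarily unable to write from scan because region is closing or splitting");
>         }
>     }
> }
> {code}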
> A simple test: a data table split fails and is rolled back successfully, 
> but deleting data afterwards always throws the exception.
>  # create a data table
>  # bulkload data into the table
>  # alter the hbase-server code so that the region split throws an 
> exception and is then rolled back
>  # split the region via the hbase shell
>  # check the regionserver log to confirm that the split failed and the 
> rollback succeeded
>  # delete data via phoenix sqlline.py, which will throw the exception:
> Caused by: java.io.IOException: Temporarily unable to write from scan because region is closing or splitting
>   at org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:516)
>   at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
>   at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
>   at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2881)
>   at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3082)
>   ... 5 more
>   at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108)
>   at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:548)
>   at org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>   at org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>   at org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>   at org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>   at org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>   at org.apache.phoenix.compile.DeleteCompiler$2.execute(DeleteCompiler.java:498)
>   at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:303)
>   at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
>   at org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:200)
>   at com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2253)
>   at com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2249)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:893)
>   at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
>   at com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2249)
>   at com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2243)
>   at org

[jira] [Updated] (PHOENIX-5860) Throw exception "region is closing or splitting" when deleting data

2020-08-21 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: PHOENIX-5860-4.x-v4.patch


[jira] [Updated] (PHOENIX-6090) Local indexes get out of sync after changes for global consistent indexes

2020-08-21 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6090:
---
Attachment: 6090-fix-v4-4.x.txt

> Local indexes get out of sync after changes for global consistent indexes
> --------------------------------------------------------------------------
>
> Key: PHOENIX-6090
> URL: https://issues.apache.org/jira/browse/PHOENIX-6090
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 5.1.0, 4.16.0
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Blocker
> Fix For: 5.1.0, 4.15.1, 4.16.0
>
> Attachments: 6090-fix-4.x.txt, 6090-fix-v2-4.x.txt, 
> 6090-fix-v3-4.x.txt, 6090-fix-v4-4.x.txt, 6090-test-4.x.txt, 
> 6090-test-v2-4.x.txt
>
>
> {code:java}
>  > select /*+ NO_INDEX */ count(*) from test;
> +----------+
> | COUNT(1) |
> +----------+
> | 522244   |
> +----------+
> 1 row selected (1.213 seconds)
> > select count(*) from test;
> +----------+
> | COUNT(1) |
> +----------+
> | 522245   |
> +----------+
> 1 row selected (1.23 seconds)
> {code}
>  
> This happened after some inserts and a bunch of splits (but not in 
> parallel).
> It is not yet clear under exactly what circumstances this happens, only 
> that it does after a while.
> This is Phoenix built from master and HBase built from branch-2.3 (client 
> and server versions of HBase are matching).
> I've since tried with Phoenix 4.x and see the same issue - see also the 
> attached tests.
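> A minimal client-side sketch of the divergence check (plain JDBC; the 
> connection URL, the table name TEST, and the helper method are 
> illustrative assumptions, not part of the report):
> {code:java}
> import java.sql.Connection;
> import java.sql.DriverManager;
> import java.sql.ResultSet;
> import java.sql.SQLException;
> import java.sql.Statement;
> 
> // Compares the raw data-table count (NO_INDEX hint) with the count the
> // optimizer answers from the index; a mismatch means the index is out of
> // sync, as in the 522244 vs 522245 numbers above.
> public class IndexDivergenceCheck {
>     public static void main(String[] args) throws SQLException {
>         try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
>              Statement stmt = conn.createStatement()) {
>             long dataCount = count(stmt, "SELECT /*+ NO_INDEX */ COUNT(*) FROM TEST");
>             long indexCount = count(stmt, "SELECT COUNT(*) FROM TEST");
>             if (dataCount != indexCount) {
>                 System.out.printf("index out of sync: data=%d, index=%d%n",
>                     dataCount, indexCount);
>             }
>         }
>     }
> 
>     private static long count(Statement stmt, String sql) throws SQLException {
>         try (ResultSet rs = stmt.executeQuery(sql)) {
>             rs.next();
>             return rs.getLong(1);
>         }
>     }
> }
> {code}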





[jira] [Updated] (PHOENIX-6090) Local indexes get out of sync after changes for global consistent indexes

2020-08-21 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6090:
---
Attachment: 6090-fix-v3-4.x.txt



[jira] [Updated] (PHOENIX-6090) Local indexes get out of sync after changes for global consistent indexes

2020-08-21 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6090:
---
Attachment: 6090-fix-v2-4.x.txt



[jira] [Assigned] (PHOENIX-6090) Local indexes get out of sync after changes for global consistent indexes

2020-08-21 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl reassigned PHOENIX-6090:
--

Assignee: Lars Hofhansl  (was: Kadir OZDEMIR)



[jira] [Updated] (PHOENIX-6093) Add hashCode to the Phoenix pherf Column class

2020-08-21 Thread Xinyi Yan (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xinyi Yan updated PHOENIX-6093:
---
Attachment: PHOENIX-6093.patch

> Add hashCode to the Phoenix pherf Column class
> ----------------------------------------------
>
> Key: PHOENIX-6093
> URL: https://issues.apache.org/jira/browse/PHOENIX-6093
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Xinyi Yan
>Assignee: Xinyi Yan
>Priority: Minor
> Attachments: PHOENIX-6093.patch
>
>
> The pherf Column class overrides equals() but not hashCode(). Adding 
> hashCode() lets us fully support serial upserts for numerical data types.
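> For context, a minimal sketch of the equals()/hashCode() pairing the 
> issue calls for (the fields are illustrative; the real pherf Column class 
> differs):
> {code:java}
> import java.util.Objects;
> 
> // Illustrative stand-in for the pherf Column class.
> public class Column {
>     private String name;
>     private int length;
> 
>     @Override
>     public boolean equals(Object o) {
>         if (this == o) return true;
>         if (!(o instanceof Column)) return false;
>         Column other = (Column) o;
>         return length == other.length && Objects.equals(name, other.name);
>     }
> 
>     // Required whenever equals() is overridden: equal objects must produce
>     // equal hash codes, or hash-based collections (HashMap, HashSet) will
>     // treat equal Columns as distinct entries.
>     @Override
>     public int hashCode() {
>         return Objects.hash(name, length);
>     }
> }
> {code}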





[jira] [Created] (PHOENIX-6093) Add hashCode to the Phoenix pherf Column class

2020-08-21 Thread Xinyi Yan (Jira)
Xinyi Yan created PHOENIX-6093:
--

 Summary: Add hashCode to the Phoenix pherf Column class
 Key: PHOENIX-6093
 URL: https://issues.apache.org/jira/browse/PHOENIX-6093
 Project: Phoenix
  Issue Type: Improvement
Reporter: Xinyi Yan
Assignee: Xinyi Yan


The pherf Column class overrides equals() but not hashCode(). Adding 
hashCode() lets us fully support serial upserts for numerical data types.





[jira] [Updated] (PHOENIX-6090) Local indexes get out of sync after changes for global consistent indexes

2020-08-21 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6090:
---
Attachment: 6090-fix-4.x.txt



[jira] [Updated] (PHOENIX-6090) Local indexes get out of sync after changes for global consistent indexes

2020-08-21 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6090:
-
Affects Version/s: 4.16.0
   4.15.0



[jira] [Updated] (PHOENIX-5860) Throw exception "region is closing or splitting" when deleting data

2020-08-21 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: PHOENIX-5860-4.x-v3.patch
