[jira] [Created] (PHOENIX-6112) Coupling of two classes only use logger

2020-08-27 Thread Chao Wang (Jira)
Chao Wang created PHOENIX-6112:
--

 Summary: Coupling of two classes only use logger
 Key: PHOENIX-6112
 URL: https://issues.apache.org/jira/browse/PHOENIX-6112
 Project: Phoenix
  Issue Type: Improvement
  Components: core
Affects Versions: 4.x, master
Reporter: Chao Wang
Assignee: Chao Wang
 Attachments: image-2020-08-28-14-48-34-990.png

PhoenixConfigurationUtil uses BaseResultIterators.logger to print its log 
messages. I think this is inappropriate, since it couples the two classes; in 
general, a class should log through its own local logger.

!image-2020-08-28-14-48-34-990.png!
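
For illustration, a minimal sketch of the suggested pattern, assuming SLF4J 
(which Phoenix uses): each class declares and logs through its own logger 
instead of borrowing another class's.

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class PhoenixConfigurationUtil {

    // A logger scoped to this class, so log output is attributed to
    // PhoenixConfigurationUtil rather than to BaseResultIterators.
    private static final Logger LOGGER =
            LoggerFactory.getLogger(PhoenixConfigurationUtil.class);

    private PhoenixConfigurationUtil() {
    }

    public static void logExample() {
        LOGGER.info("logged under this class's own category");
    }
}
{code}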



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6111) Jenkins jobs are unable to create new native thread

2020-08-27 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated PHOENIX-6111:
-
Description: 
Jenkins jobs are randomly failing with
{noformat}
java.lang.OutOfMemoryError: unable to create new native thread{noformat}
The ulimit on the slaves is 3, and cannot be increased from within the job.

The typical thread count used by our test suite is 3000-4000.

It is not clear yet if this is caused by our thread use spiking, or if a 
parallel job exhausts the thread limit.

  was:
Jenkins jobs are randomly failing with
java.lang.OutOfMemoryError: unable to create new native thread
The ulimit on the slaves is 3, and cannot be increased from within the job.

The typical thread count used by our test suite is 3000-4000.

It is not clear yet if this is caused by our thread use spiking, or if a 
parallel job exhausts the thread limit.


> Jenkins jobs are unable to create new native thread
> ---
>
> Key: PHOENIX-6111
> URL: https://issues.apache.org/jira/browse/PHOENIX-6111
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> Jenkins jobs are randomly failing with
> {noformat}
> java.lang.OutOfMemoryError: unable to create new native thread{noformat}
> The ulimit on the slaves is 3, and cannot be increased from within the 
> job.
> The typical thread count used by our test suite is 3000-4000.
> It is not clear yet if this is caused by our thread use spiking, or if a 
> parallel job exhausts the thread limit.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-6111) Jenkins jobs are unable to create new native thread

2020-08-27 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth reassigned PHOENIX-6111:


Assignee: Istvan Toth

> Jenkins jobs are unable to create new native thread
> ---
>
> Key: PHOENIX-6111
> URL: https://issues.apache.org/jira/browse/PHOENIX-6111
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> Jenkins jobs are randomly failing with
> java.lang.OutOfMemoryError: unable to create new native thread
> The ulimit on the slaves is 3, and cannot be increased from within the 
> job.
> The typical thread count used by our test suite is 3000-4000.
> It is not clear yet if this is caused by our thread use spiking, or if a 
> parallel job exhausts the thread limit.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6111) Jenkins jobs are unable to create new native thread

2020-08-27 Thread Istvan Toth (Jira)
Istvan Toth created PHOENIX-6111:


 Summary: Jenkins jobs are unable to create new native thread
 Key: PHOENIX-6111
 URL: https://issues.apache.org/jira/browse/PHOENIX-6111
 Project: Phoenix
  Issue Type: Bug
Reporter: Istvan Toth


Jenkins jobs are randomly failing with
java.lang.OutOfMemoryError: unable to create new native thread
The ulimit on the slaves is 3, and cannot be increased from within the job.

The typical thread count used by our test suite is 3000-4000.

It is not clear yet if this is caused by our thread use spiking, or if a 
parallel job exhausts the thread limit.
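
As a diagnostic aid, a hedged sketch (not part of the build; the probe class 
is an assumption) that logs the JVM's live and peak thread counts, which could 
help tell our own thread use spiking apart from an external limit being hit:

{code:java}
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ThreadCountProbe {
    public static void main(String[] args) {
        // ThreadMXBean tracks live, peak, and total started threads for this JVM.
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        System.out.printf("live=%d peak=%d totalStarted=%d%n",
                threads.getThreadCount(),
                threads.getPeakThreadCount(),
                threads.getTotalStartedThreadCount());
    }
}
{code}

Note that this only sees the current JVM's threads; counting a parallel job's 
threads would require a per-user process count on the slave itself.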



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (PHOENIX-6056) Migrate from builds.apache.org by August 15

2020-08-27 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth resolved PHOENIX-6056.
--
Resolution: Fixed

Closing this, hopefully for the last time.

We still have the running-out-of-processes issue, but I'm not going to lump 
that into this ticket.

> Migrate from builds.apache.org by August 15
> ---
>
> Key: PHOENIX-6056
> URL: https://issues.apache.org/jira/browse/PHOENIX-6056
> Project: Phoenix
>  Issue Type: Task
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Critical
> Attachments: PHOENIX-6056.master.v1.patch, 
> PHOENIX-6056.master.v2.patch, PHOENIX-6056.master.v3.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> {noformat}
> Hi All,
> This NOTICE is for everyone on builds.apache.org. We are migrating to a new
> Cloudbees based Client Master called https://ci-builds.apache.org. The
> migrations of all jobs needs to be done before the switch off date of 15th
> August 2020, so you have a maximum of 4 weeks.
> There is no tool or automated way of migrating your jobs, the
> differences in the platforms, the plugins and the setup makes it impossible
> to do in a safe way. So, you all need to start creating new jobs on
> ci-infra.a.o and then , when you are happy, turn off your old builds on
> builds.a.o.
> There are currently 4 agents over there ready to take jobs, plus a floating
> agent which is shared amongst many masters (more to come). I will migrate
> away 2 more agents from builds.a.o to ci-builds.a.o every few days, and
> will keep an eye of load across both and adjust accordingly.
> If needed, create a ticket on INFRA jira for any issues that crop up, or
> email here on builds@a.o. there may be one or two plugins we need to
> install/tweak etc.
> We will be not using 'Views' at the top level, but rather we will take
> advantage of 'Folders Plus' - each project will get its own Folder and have
> authorisation access to create/edit jobs etc within that folder. I will
> create these folders as you ask for them to start with. This setup allows
> for credentials isolation amongst other benefits, including but not limited
> to exclusive agents (Controlled Agents) for your own use , should you have
> any project targeted donations of agents.
> As with other aspects of the ASF, projects can choose to just enable all
> committers access to their folder, just ask.
> We will re-use builds.apache.org as a CNAME to ci-builds.a.o but will not
> be setting up any forwarding rules or anything like that.
> So, please, get started *now *on this so you can be sure we are all
> completed before the final cutoff date of 15th August 2020.
> Any questions - I expect a few (dozen :) ) - ask away and/or file INFRA
> tickets.
> Hadoop and related projects have their own migration path to follow, same
> cut off date, Cassandra, Beam, CouchDB have already migrated and are doing
> very well in their new homes.
> Lets get going ...
> -- 
> *Gavin McDonald*
> Systems Administrator
> ASF Infrastructure Team{noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-08-27 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: PHOENIX-5860.4.x.patch

> Throw exception which region is closing or splitting when delete data
> -
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Fix For: 4.x
>
> Attachments: PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch, 
> PHOENIX-5860.4.x.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently, deletes are executed server-side in the UngroupedAggregateRegionObserver 
> class, which checks the isRegionClosingOrSplitting flag; when the flag is true, it 
> throws new IOException("Temporarily unable to write from scan because region is 
> closing or splitting").
> When a region comes online, the Phoenix coprocessor initializes 
> isRegionClosingOrSplitting to false. Before a region split, the flag is set to 
> true, but if the split fails, the rollback does not reset it to false. After that, 
> every write operation fails with "Temporarily unable to write from scan because 
> region is closing or splitting".
> So we should reset isRegionClosingOrSplitting to false in preRollBackSplit in the 
> UngroupedAggregateRegionObserver class.
> A simple reproduction, in which a data table split fails and the rollback 
> succeeds, yet deletes keep throwing the exception:
>  # create a data table
>  # bulkload data into the table
>  # alter the hbase-server code so that the region split throws an exception and 
> then rolls back
>  # use the hbase shell to split the region
>  # check the regionserver log to confirm that the split failed and the rollback 
> succeeded
>  # delete data via Phoenix sqlline.py, which throws:
>  Caused by: java.io.IOException: Temporarily unable to write from scan 
> because region is closing or splitting Caused by: java.io.IOException: 
> Temporarily unable to write from scan because region is closing or splitting 
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:516)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2881)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3082)
>  ... 5 more
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108) 
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:548)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>  at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>  at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>  at 
> org.apache.phoenix.compile.DeleteCompiler$2.execute(DeleteCompiler.java:498) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:303) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295) 
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
>  at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:200)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2253)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2249)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:893) at 
> org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2249)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2243)
>  at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:798)
>
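
A minimal sketch of the proposed fix, assuming the HBase 1.x RegionObserver 
API that Phoenix 4.x builds against (field and method placement are simplified 
relative to the real UngroupedAggregateRegionObserver):

{code:java}
import java.io.IOException;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;

public class SplitAwareObserverSketch extends BaseRegionObserver {

    private volatile boolean isRegionClosingOrSplitting = false;

    @Override
    public void preSplit(ObserverContext<RegionCoprocessorEnvironment> ctx)
            throws IOException {
        // The region is about to split: start rejecting writes from scans.
        isRegionClosingOrSplitting = true;
    }

    @Override
    public void preRollBackSplit(ObserverContext<RegionCoprocessorEnvironment> ctx)
            throws IOException {
        // The split failed and is rolling back, so the region stays open:
        // clear the flag, otherwise writes would be rejected forever.
        isRegionClosingOrSplitting = false;
    }
}
{code}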

[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-08-27 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: (was: PHOENIX-5860.4.x.patch)

> Throw exception which region is closing or splitting when delete data
> -
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Fix For: 4.x
>
> Attachments: PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently, deletes are executed server-side in the UngroupedAggregateRegionObserver 
> class, which checks the isRegionClosingOrSplitting flag; when the flag is true, it 
> throws new IOException("Temporarily unable to write from scan because region is 
> closing or splitting").
> When a region comes online, the Phoenix coprocessor initializes 
> isRegionClosingOrSplitting to false. Before a region split, the flag is set to 
> true, but if the split fails, the rollback does not reset it to false. After that, 
> every write operation fails with "Temporarily unable to write from scan because 
> region is closing or splitting".
> So we should reset isRegionClosingOrSplitting to false in preRollBackSplit in the 
> UngroupedAggregateRegionObserver class.
> A simple reproduction, in which a data table split fails and the rollback 
> succeeds, yet deletes keep throwing the exception:
>  # create a data table
>  # bulkload data into the table
>  # alter the hbase-server code so that the region split throws an exception and 
> then rolls back
>  # use the hbase shell to split the region
>  # check the regionserver log to confirm that the split failed and the rollback 
> succeeded
>  # delete data via Phoenix sqlline.py, which throws:
>  Caused by: java.io.IOException: Temporarily unable to write from scan 
> because region is closing or splitting Caused by: java.io.IOException: 
> Temporarily unable to write from scan because region is closing or splitting 
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:516)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2881)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3082)
>  ... 5 more
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108) 
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:548)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>  at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>  at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>  at 
> org.apache.phoenix.compile.DeleteCompiler$2.execute(DeleteCompiler.java:498) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:303) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295) 
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
>  at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:200)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2253)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2249)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:893) at 
> org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2249)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2243)
>  at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:798)
>  at 
> org.apac

[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-08-27 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: PHOENIX-5860.4.x.patch

> Throw exception which region is closing or splitting when delete data
> -
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Fix For: 4.x
>
> Attachments: PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch, 
> PHOENIX-5860.4.x.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently, deletes are executed server-side in the UngroupedAggregateRegionObserver 
> class, which checks the isRegionClosingOrSplitting flag; when the flag is true, it 
> throws new IOException("Temporarily unable to write from scan because region is 
> closing or splitting").
> When a region comes online, the Phoenix coprocessor initializes 
> isRegionClosingOrSplitting to false. Before a region split, the flag is set to 
> true, but if the split fails, the rollback does not reset it to false. After that, 
> every write operation fails with "Temporarily unable to write from scan because 
> region is closing or splitting".
> So we should reset isRegionClosingOrSplitting to false in preRollBackSplit in the 
> UngroupedAggregateRegionObserver class.
> A simple reproduction, in which a data table split fails and the rollback 
> succeeds, yet deletes keep throwing the exception:
>  # create a data table
>  # bulkload data into the table
>  # alter the hbase-server code so that the region split throws an exception and 
> then rolls back
>  # use the hbase shell to split the region
>  # check the regionserver log to confirm that the split failed and the rollback 
> succeeded
>  # delete data via Phoenix sqlline.py, which throws:
>  Caused by: java.io.IOException: Temporarily unable to write from scan 
> because region is closing or splitting Caused by: java.io.IOException: 
> Temporarily unable to write from scan because region is closing or splitting 
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:516)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2881)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3082)
>  ... 5 more
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108) 
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:548)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>  at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>  at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>  at 
> org.apache.phoenix.compile.DeleteCompiler$2.execute(DeleteCompiler.java:498) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:303) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295) 
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
>  at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:200)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2253)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2249)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:893) at 
> org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2249)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2243)
>  at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:798)
>

[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-08-27 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: (was: PHOENIX-5860.4.x.patch)

> Throw exception which region is closing or splitting when delete data
> -
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Fix For: 4.x
>
> Attachments: PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently, deletes are executed server-side in the UngroupedAggregateRegionObserver 
> class, which checks the isRegionClosingOrSplitting flag; when the flag is true, it 
> throws new IOException("Temporarily unable to write from scan because region is 
> closing or splitting").
> When a region comes online, the Phoenix coprocessor initializes 
> isRegionClosingOrSplitting to false. Before a region split, the flag is set to 
> true, but if the split fails, the rollback does not reset it to false. After that, 
> every write operation fails with "Temporarily unable to write from scan because 
> region is closing or splitting".
> So we should reset isRegionClosingOrSplitting to false in preRollBackSplit in the 
> UngroupedAggregateRegionObserver class.
> A simple reproduction, in which a data table split fails and the rollback 
> succeeds, yet deletes keep throwing the exception:
>  # create a data table
>  # bulkload data into the table
>  # alter the hbase-server code so that the region split throws an exception and 
> then rolls back
>  # use the hbase shell to split the region
>  # check the regionserver log to confirm that the split failed and the rollback 
> succeeded
>  # delete data via Phoenix sqlline.py, which throws:
>  Caused by: java.io.IOException: Temporarily unable to write from scan 
> because region is closing or splitting Caused by: java.io.IOException: 
> Temporarily unable to write from scan because region is closing or splitting 
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:516)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2881)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3082)
>  ... 5 more
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108) 
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:548)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>  at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>  at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>  at 
> org.apache.phoenix.compile.DeleteCompiler$2.execute(DeleteCompiler.java:498) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:303) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295) 
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
>  at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:200)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2253)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2249)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:893) at 
> org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2249)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2243)
>  at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:798)
>  at 
> org.apac

[jira] [Updated] (PHOENIX-6093) adding hashcode to phoenix pherf Column class

2020-08-27 Thread Xinyi Yan (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xinyi Yan updated PHOENIX-6093:
---
Attachment: (was: PHOENIX-6093.v2.patch)

> adding hashcode to phoenix pherf Column class
> -
>
> Key: PHOENIX-6093
> URL: https://issues.apache.org/jira/browse/PHOENIX-6093
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Xinyi Yan
>Assignee: Xinyi Yan
>Priority: Minor
> Attachments: PHOENIX-6093.patch, PHOENIX-6093.v2.patch
>
>
> The pherf Column class overrides equals() but not hashCode(). Adding hashCode() 
> so that we can fully support serial upsert for numerical data types.
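
For reference, a minimal sketch of the equals()/hashCode() contract being 
restored (the fields here are hypothetical; the real pherf Column class has 
more state):

{code:java}
import java.util.Objects;

public class Column {
    private String name;
    private String type;

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Column)) return false;
        Column other = (Column) o;
        return Objects.equals(name, other.name)
                && Objects.equals(type, other.type);
    }

    @Override
    public int hashCode() {
        // Hash exactly the fields equals() compares, so equal objects
        // always land in the same hash bucket.
        return Objects.hash(name, type);
    }
}
{code}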



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6093) adding hashcode to phoenix pherf Column class

2020-08-27 Thread Xinyi Yan (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xinyi Yan updated PHOENIX-6093:
---
Attachment: PHOENIX-6093.v2.patch

> adding hashcode to phoenix pherf Column class
> -
>
> Key: PHOENIX-6093
> URL: https://issues.apache.org/jira/browse/PHOENIX-6093
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Xinyi Yan
>Assignee: Xinyi Yan
>Priority: Minor
> Attachments: PHOENIX-6093.patch, PHOENIX-6093.v2.patch
>
>
> The pherf Column class overrides equals() but not hashCode(). Adding hashCode() 
> so that we can fully support serial upsert for numerical data types.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6093) adding hashcode to phoenix pherf Column class

2020-08-27 Thread Xinyi Yan (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xinyi Yan updated PHOENIX-6093:
---
Attachment: (was: PHOENIX-6093.v2.patch)

> adding hashcode to phoenix pherf Column class
> -
>
> Key: PHOENIX-6093
> URL: https://issues.apache.org/jira/browse/PHOENIX-6093
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Xinyi Yan
>Assignee: Xinyi Yan
>Priority: Minor
> Attachments: PHOENIX-6093.patch, PHOENIX-6093.v2.patch
>
>
> The pherf Column class overrides equals() but not hashCode(). Adding hashCode() 
> so that we can fully support serial upsert for numerical data types.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6093) adding hashcode to phoenix pherf Column class

2020-08-27 Thread Xinyi Yan (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xinyi Yan updated PHOENIX-6093:
---
Attachment: PHOENIX-6093.v2.patch

> adding hashcode to phoenix pherf Column class
> -
>
> Key: PHOENIX-6093
> URL: https://issues.apache.org/jira/browse/PHOENIX-6093
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Xinyi Yan
>Assignee: Xinyi Yan
>Priority: Minor
> Attachments: PHOENIX-6093.patch, PHOENIX-6093.v2.patch
>
>
> The pherf Column class overrides equals() but not hashCode(). Adding hashCode() 
> so that we can fully support serial upsert for numerical data types.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6093) adding hashcode to phoenix pherf Column class

2020-08-27 Thread Xinyi Yan (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xinyi Yan updated PHOENIX-6093:
---
Attachment: PHOENIX-6093.v2.patch

> adding hashcode to phoenix pherf Column class
> -
>
> Key: PHOENIX-6093
> URL: https://issues.apache.org/jira/browse/PHOENIX-6093
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Xinyi Yan
>Assignee: Xinyi Yan
>Priority: Minor
> Attachments: PHOENIX-6093.patch, PHOENIX-6093.v2.patch
>
>
> The pherf Column class overrides equals() but not hashCode(). Adding hashCode() 
> so that we can fully support serial upsert for numerical data types.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6110) Disable Permission ITs on HBase 2.1

2020-08-27 Thread Istvan Toth (Jira)
Istvan Toth created PHOENIX-6110:


 Summary: Disable Permission ITs on HBase 2.1
 Key: PHOENIX-6110
 URL: https://issues.apache.org/jira/browse/PHOENIX-6110
 Project: Phoenix
  Issue Type: Bug
  Components: core
Affects Versions: 5.1.0
Reporter: Istvan Toth
Assignee: Istvan Toth


The permission tests have been flaky on HBase 2.1 ever since we started 
supporting it.

As we do not see these problems on 2.2 or 2.3, I am reasonably confident that 
the issue is that permission changes are not synchronous on 2.1 under load.

Since 2.1 is EOL, there is no hope of fixing this issue there. As we do not 
want to drop support for 2.1 yet (we may want to do so after 5.1 is released), 
we should disable the flaky permission-related tests, so that our test results 
stay relevant.
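
One hedged way to express the skip (the eventual fix may use a different 
mechanism, e.g. Maven profile exclusions, and the class name here is 
hypothetical), using a JUnit assumption so the tests are skipped rather than 
failed on HBase 2.1:

{code:java}
import static org.junit.Assume.assumeFalse;

import org.apache.hadoop.hbase.util.VersionInfo;
import org.junit.Before;

public abstract class PermissionsITSkipSketch {

    @Before
    public void skipOnHBase21() {
        // Permission changes are not synchronous on HBase 2.1 under load,
        // so skip these ITs there to keep test results meaningful.
        assumeFalse("Permission ITs are flaky on HBase 2.1",
                VersionInfo.getVersion().startsWith("2.1."));
    }
}
{code}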



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6107) Discuss speed up of BaseQueryIT

2020-08-27 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated PHOENIX-6107:
-
Attachment: PHOENIX-6107.master.v2.patch

> Discuss speed up of BaseQueryIT
> ---
>
> Key: PHOENIX-6107
> URL: https://issues.apache.org/jira/browse/PHOENIX-6107
> Project: Phoenix
>  Issue Type: Wish
>Reporter: Lars Hofhansl
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 5.1.0, 4.16.0
>
> Attachments: 6107-master.txt, 6107-proposal.txt, 
> PHOENIX-6107-stoty.master.v1.patch, PHOENIX-6107.master.v2.patch
>
>
> All 14 tests derived from BaseQueryIT are some of the slowest we have.
> I noticed that all these tests run 7 times and each time create a table and 6 
> indexes.
> So just in terms of setup there are 14*7 = 98 tables created and 14*7*6 = 588 
> indexes created.
> It's not clear to me that the runtime is justified, especially since we 
> have so many other index ITs.
> I think we can reduce this to run with one global index and one local index, 
> for a repeat of only 3 times instead of 7. That would benefit all derived 
> tests and shave off probably around 50% of the overall Phoenix test runtime.
> I.e. 14*3 = 42 tables, and 14*3*2 = 84 indexes.
> We could even go as far as testing with no indexes here.
> Yes, it would potentially reduce coverage. Hence a discussion.
> Thoughts?
> (Marked as "Wish" so that we can discuss)
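
A hedged sketch of the proposed trimming, assuming the JUnit 4 @Parameterized 
pattern the query ITs use (the parameter shape and DDL strings here are 
illustrative, not the actual BaseQueryIT matrix):

{code:java}
import java.util.Arrays;
import java.util.Collection;

import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public abstract class BaseQueryITSketch {

    // 3 variants instead of 7: no index, one global index, one local index.
    @Parameters(name = "indexDDL={0}")
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] {
                { "" },
                { "CREATE INDEX %s ON %s (a_integer)" },
                { "CREATE LOCAL INDEX %s ON %s (a_integer)" },
        });
    }
}
{code}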



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-6107) Discuss speed up of BaseQueryIT

2020-08-27 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl reassigned PHOENIX-6107:
--

Assignee: Istvan Toth

> Discuss speed up of BaseQueryIT
> ---
>
> Key: PHOENIX-6107
> URL: https://issues.apache.org/jira/browse/PHOENIX-6107
> Project: Phoenix
>  Issue Type: Wish
>Reporter: Lars Hofhansl
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 5.1.0, 4.16.0
>
> Attachments: 6107-master.txt, 6107-proposal.txt, 
> PHOENIX-6107-stoty.master.v1.patch
>
>
> All 14 tests derived from BaseQueryIT are some of the slowest we have.
> I noticed that all these tests run 7 times and each time create a table and 6 
> indexes.
> So just in terms of setup there are 14*7 = 98 tables created and 14*7*6 = 588 
> indexes created.
> It's not clear to me that the runtime is justified, especially since we 
> have so many other index ITs.
> I think we can reduce this to run with one global index and one local index, 
> for a repeat of only 3 times instead of 7. That would benefit all derived 
> tests and shave off probably around 50% of the overall Phoenix test runtime.
> I.e. 14*3 = 42 tables, and 14*3*2 = 84 indexes.
> We could even go as far as testing with no indexes here.
> Yes, it would potentially reduce coverage. Hence a discussion.
> Thoughts?
> (Marked as "Wish" so that we can discuss)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6107) Discuss speed up of BaseQueryIT

2020-08-27 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6107:
---
Fix Version/s: 4.16.0
   5.1.0

> Discuss speed up of BaseQueryIT
> ---
>
> Key: PHOENIX-6107
> URL: https://issues.apache.org/jira/browse/PHOENIX-6107
> Project: Phoenix
>  Issue Type: Wish
>Reporter: Lars Hofhansl
>Priority: Major
> Fix For: 5.1.0, 4.16.0
>
> Attachments: 6107-master.txt, 6107-proposal.txt, 
> PHOENIX-6107-stoty.master.v1.patch
>
>
> All 14 tests derived from BaseQueryIT are some of the slowest we have.
> I noticed that all these tests run 7 times and each time create a table and 6 
> indexes.
> So just in terms of setup there are 14*7 = 98 tables created and 14*7*6 = 588 
> indexes created.
> It's not clear to me that the runtime is justified, especially since we 
> have so many other index ITs.
> I think we can reduce this to run with one global index and one local index, 
> for a repeat of only 3 times instead of 7. That would benefit all derived 
> tests and shave off probably around 50% of the overall Phoenix test runtime.
> I.e. 14*3 = 42 tables, and 14*3*2 = 84 indexes.
> We could even go as far as testing with no indexes here.
> Yes, it would potentially reduce coverage. Hence a discussion.
> Thoughts?
> (Marked as "Wish" so that we can discuss)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6010) Create phoenix-thirdparty, and consume guava through it

2020-08-27 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated PHOENIX-6010:
-
Attachment: PHOENIX-6010.master.v5.patch

> Create phoenix-thirdparty, and consume guava through it
> ---
>
> Key: PHOENIX-6010
> URL: https://issues.apache.org/jira/browse/PHOENIX-6010
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core, omid, tephra
>Affects Versions: 5.1.0, 4.16.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
> Attachments: PHOENIX-6010.master.v1.patch, 
> PHOENIX-6010.master.v2.patch, PHOENIX-6010.master.v3.patch, 
> PHOENIX-6010.master.v4.patch, PHOENIX-6010.master.v5.patch
>
>
> We have long-standing and well-documented problems with Guava, just like the 
> rest of the Hadoop components.
> Adopt the solution used by HBase:
>  * create phoenix-thirdparty repo
>  * create a pre-shaded phoenix-shaded-guava artifact in it
>  * Use the pre-shaded Guava in every phoenix component
> The advantages are well-known, but to name a few:
>  * Phoenix will work with Hadoop 3.1.3+
>  * One less CVE in our direct dependencies
>  * No more conflict with our consumer's Guava versions
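
From a consumer's perspective the change looks roughly like this (the 
relocated package prefix is an assumption modeled on the hbase-thirdparty 
pattern):

{code:java}
// Instead of: import com.google.common.collect.ImmutableList;
import org.apache.phoenix.thirdparty.com.google.common.collect.ImmutableList;

public class ShadedGuavaExample {
    public static void main(String[] args) {
        // Same Guava API, but under a Phoenix-owned package name, so it
        // cannot clash with the Guava version Hadoop or a consumer ships.
        ImmutableList<String> components = ImmutableList.of("core", "omid", "tephra");
        System.out.println(components);
    }
}
{code}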



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6107) Discuss speed up of BaseQueryIT

2020-08-27 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated PHOENIX-6107:
-
Attachment: PHOENIX-6107-stoty.master.v1.patch

> Discuss speed up of BaseQueryIT
> ---
>
> Key: PHOENIX-6107
> URL: https://issues.apache.org/jira/browse/PHOENIX-6107
> Project: Phoenix
>  Issue Type: Wish
>Reporter: Lars Hofhansl
>Priority: Major
> Attachments: 6107-master.txt, 6107-proposal.txt, 
> PHOENIX-6107-stoty.master.v1.patch
>
>
> All 14 tests derived from BaseQueryIT are some of the slowest we have.
> I noticed that all these tests run 7 times and each time create a table and 6 
> indexes.
> So just in terms of setup there are 14*7 = 98 tables created and 14*7*6 = 588 
> indexes created.
> It's not clear to me that the runtime is justified, especially since we 
> have so many other index ITs.
> I think we can reduce this to run with one global index and one local index, 
> for a repeat of only 3 times instead of 7. That would benefit all derived 
> tests and shave off probably around 50% of the overall Phoenix test runtime.
> I.e. 14*3 = 42 tables, and 14*3*2 = 84 indexes.
> We could even go as far as testing with no indexes here.
> Yes, it would potentially reduce coverage. Hence a discussion.
> Thoughts?
> (Marked as "Wish" so that we can discuss)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-08-27 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: PHOENIX-5860.4.x.patch

> Throw exception which region is closing or splitting when delete data
> -
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Fix For: 4.x
>
> Attachments: PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch, 
> PHOENIX-5860.4.x.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently, deletes are executed server-side in the UngroupedAggregateRegionObserver 
> class, which checks the isRegionClosingOrSplitting flag; when the flag is true, it 
> throws new IOException("Temporarily unable to write from scan because region is 
> closing or splitting").
> When a region comes online, the Phoenix coprocessor initializes 
> isRegionClosingOrSplitting to false. Before a region split, the flag is set to 
> true, but if the split fails, the rollback does not reset it to false. After that, 
> every write operation fails with "Temporarily unable to write from scan because 
> region is closing or splitting".
> So we should reset isRegionClosingOrSplitting to false in preRollBackSplit in the 
> UngroupedAggregateRegionObserver class.
> A simple reproduction, in which a data table split fails and the rollback 
> succeeds, yet deletes keep throwing the exception:
>  # create a data table
>  # bulkload data into the table
>  # alter the hbase-server code so that the region split throws an exception and 
> then rolls back
>  # use the hbase shell to split the region
>  # check the regionserver log to confirm that the split failed and the rollback 
> succeeded
>  # delete data via Phoenix sqlline.py, which throws:
>  Caused by: java.io.IOException: Temporarily unable to write from scan 
> because region is closing or splitting Caused by: java.io.IOException: 
> Temporarily unable to write from scan because region is closing or splitting 
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:516)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2881)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3082)
>  ... 5 more
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108) 
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:548)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>  at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>  at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>  at 
> org.apache.phoenix.compile.DeleteCompiler$2.execute(DeleteCompiler.java:498) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:303) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295) 
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
>  at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:200)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2253)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2249)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:893) at 
> org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2249)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2243)
>  at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:798)
>

[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-08-27 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: (was: PHOENIX-5860-4.x.patch)

> Throw exception which region is closing or splitting when delete data
> -
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Fix For: 4.x
>
> Attachments: PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch, 
> PHOENIX-5860.4.x.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently, deletes are executed server-side in the UngroupedAggregateRegionObserver 
> class, which checks the isRegionClosingOrSplitting flag; when the flag is true, it 
> throws new IOException("Temporarily unable to write from scan because region is 
> closing or splitting").
> When a region comes online, the Phoenix coprocessor initializes 
> isRegionClosingOrSplitting to false. Before a region split, the flag is set to 
> true, but if the split fails, the rollback does not reset it to false. After that, 
> every write operation fails with "Temporarily unable to write from scan because 
> region is closing or splitting".
> So we should reset isRegionClosingOrSplitting to false in preRollBackSplit in the 
> UngroupedAggregateRegionObserver class.
> A simple reproduction, in which a data table split fails and the rollback 
> succeeds, yet deletes keep throwing the exception:
>  # create a data table
>  # bulkload data into the table
>  # alter the hbase-server code so that the region split throws an exception and 
> then rolls back
>  # use the hbase shell to split the region
>  # check the regionserver log to confirm that the split failed and the rollback 
> succeeded
>  # delete data via Phoenix sqlline.py, which throws:
>  Caused by: java.io.IOException: Temporarily unable to write from scan 
> because region is closing or splitting Caused by: java.io.IOException: 
> Temporarily unable to write from scan because region is closing or splitting 
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:516)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2881)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3082)
>  ... 5 more
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108) 
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:548)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>  at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>  at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>  at 
> org.apache.phoenix.compile.DeleteCompiler$2.execute(DeleteCompiler.java:498) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:303) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295) 
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
>  at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:200)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2253)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2249)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:893) at 
> org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2249)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2243)
>  at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.s