[jira] [Created] (PHOENIX-6164) Rename phoenix4-compat and phoenix5-compat maven artifacts
Istvan Toth created PHOENIX-6164: Summary: Rename phoenix4-compat and phoenix5-compat maven artifacts Key: PHOENIX-6164 URL: https://issues.apache.org/jira/browse/PHOENIX-6164 Project: Phoenix Issue Type: Improvement Components: connectors Affects Versions: connectors-6.0.0 Reporter: Istvan Toth The new connectors build uses compatibility modules. These have the coordinates org.apache.phoenix:phoenix4-compat and org.apache.phoenix:phoenix5-compat. These artifacts are connectors-specific, yet this is not apparent from their names. Rename them so that the relationship is immediately apparent, to something like phoenix-connectors-phoenix4-compat. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data
[ https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chao Wang updated PHOENIX-5860: --- Attachment: (was: PHOENIX-5860-4.x.patch)
> Throw exception which region is closing or splitting when delete data
> ---
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
> Issue Type: Bug
> Components: core
> Affects Versions: 4.13.1, 4.x
> Reporter: Chao Wang
> Assignee: Chao Wang
> Priority: Blocker
> Fix For: 4.x
>
> Attachments: PHOENIX-5860-4.x.patch, PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch
>
> Time Spent: 1h 20m
> Remaining Estimate: 0h
>
> Delete operations are currently executed on the server side by the UngroupedAggregateRegionObserver class, which checks the isRegionClosingOrSplitting flag. When the flag is true, it throws new IOException("Temporarily unable to write from scan because region is closing or splitting").
> When a region comes online, the Phoenix coprocessor initializes isRegionClosingOrSplitting to false. Before a region split, the region sets it to true. But if the split fails, the rollback does not reset it to false, so from then on every write operation throws "Temporarily unable to write from scan because region is closing or splitting".
> We should therefore reset isRegionClosingOrSplitting to false in preRollBackSplit in the UngroupedAggregateRegionObserver class.
> A simple reproduction: a data table split fails and rolls back successfully, but deleting data keeps throwing the exception.
> # create a data table
> # bulkload data into the table
> # alter the hbase-server code so that the region split throws an exception and rolls back
> # use the hbase shell to split a region
> # check the regionserver log: the region split failed and the rollback succeeded
> # use phoenix sqlline.py to delete data, which throws the exception:
> Caused by: java.io.IOException: Temporarily unable to write from scan because region is closing or splitting
> at org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:516)
> at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
> at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2881)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3082)
> ... 5 more
> at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108)
> at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:548)
> at org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
> at org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
> at org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
> at org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
> at org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
> at org.apache.phoenix.compile.DeleteCompiler$2.execute(DeleteCompiler.java:498)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:303)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
> at org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:200)
> at com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2253)
> at com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2249)
> at scala.collection.Iterator$class.foreach(Iterator.scala:893)
> at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
> at com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2249)
> at com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2243)
> at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.appl
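The flag lifecycle described in the report can be sketched as follows. This is a hedged, stand-alone model, not the actual Phoenix coprocessor code: the class name SplitFlagSketch and its methods are hypothetical stand-ins that mirror the hooks named in the issue (preSplit, preRollBackSplit, and the isRegionClosingOrSplitting check).

```java
// Minimal sketch (hypothetical, not the real Phoenix code) of the proposed fix:
// reset the isRegionClosingOrSplitting flag when a region split rolls back, so
// a failed split does not block writes forever.
import java.io.IOException;
import java.util.concurrent.atomic.AtomicBoolean;

public class SplitFlagSketch {
    // Stand-in for the flag in UngroupedAggregateRegionObserver.
    private final AtomicBoolean isRegionClosingOrSplitting = new AtomicBoolean(false);

    // Called when a region split begins: block writes.
    public void preSplit() {
        isRegionClosingOrSplitting.set(true);
    }

    // The proposed fix: clear the flag when the split is rolled back.
    public void preRollBackSplit() {
        isRegionClosingOrSplitting.set(false);
    }

    // Mirrors the server-side check that guards scans and deletes.
    public void checkWritable() throws IOException {
        if (isRegionClosingOrSplitting.get()) {
            throw new IOException("Temporarily unable to write from scan "
                    + "because region is closing or splitting");
        }
    }

    public boolean isBlocked() {
        return isRegionClosingOrSplitting.get();
    }

    public static void main(String[] args) throws IOException {
        SplitFlagSketch region = new SplitFlagSketch();
        region.preSplit();         // split starts: writes would now fail
        region.preRollBackSplit(); // split fails and rolls back: flag cleared
        region.checkWritable();    // no exception: deletes work again
        System.out.println("writes allowed after rollback: " + !region.isBlocked());
    }
}
```

Without the preRollBackSplit reset, checkWritable() would keep throwing after a failed split, which matches the permanent "Temporarily unable to write" errors described above.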
[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data
[ https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chao Wang updated PHOENIX-5860: --- Attachment: PHOENIX-5860-4.x.patch
[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data
[ https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chao Wang updated PHOENIX-5860: --- Attachment: (was: PHOENIX-5860-4.x-v2.patch)
[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data
[ https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chao Wang updated PHOENIX-5860: --- Attachment: PHOENIX-5860-4.x-v2.patch
[jira] [Updated] (PHOENIX-6159) Phoenix-pherf writes the result file even disableRuntimeResult flag is true
[ https://issues.apache.org/jira/browse/PHOENIX-6159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xinyi Yan updated PHOENIX-6159: --- Description: The [MultiThreadedRunner|https://github.com/apache/phoenix/blob/master/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/MultiThreadedRunner.java#L109] writes the result to the file without checking the writeRuntimeResults value. I'm not sure if anyone is using the [disableRuntimeResult param in Pherf.java|https://github.com/apache/phoenix/blob/master/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java#L156], which does not behave as its name indicates. I would prefer to fix the behavior so that it matches the meaning of disableRuntimeResult.

was: The ResultManager contains logic that checks writeRuntimeResults and creates a defaultHandler or minimalHandler ([ResultManager|https://github.com/apache/phoenix/blob/master/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/result/ResultManager.java#L69]), but this does not reflect the meaning of writeRuntimeResults: if writeRuntimeResults is false, no ResultHandler should be added for later use. The [MultiThreadedRunner|https://github.com/apache/phoenix/blob/master/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/MultiThreadedRunner.java#L109] writes the result to the file without checking the writeRuntimeResults value. I'm not sure if anyone is using the [disableRuntimeResult param in Pherf.java|https://github.com/apache/phoenix/blob/master/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java#L156], which does not behave as its name indicates. I would prefer to fix the behavior so that it matches the meaning of disableRuntimeResult.

> Phoenix-pherf writes the result file even disableRuntimeResult flag is true
> ---
>
> Key: PHOENIX-6159
> URL: https://issues.apache.org/jira/browse/PHOENIX-6159
> Project: Phoenix
> Issue Type: Bug
> Affects Versions: 4.15.0
> Reporter: Xinyi Yan
> Priority: Major
> Attachments: PHOENIX-6159.patch
>
> The [MultiThreadedRunner|https://github.com/apache/phoenix/blob/master/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/MultiThreadedRunner.java#L109] writes the result to the file without checking the writeRuntimeResults value.
> I'm not sure if anyone is using the [disableRuntimeResult param in Pherf.java|https://github.com/apache/phoenix/blob/master/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java#L156], which does not behave as its name indicates. I would prefer to fix the behavior so that it matches the meaning of disableRuntimeResult.
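The intended behavior can be sketched as follows. This is a hedged, stand-alone model, not the actual Pherf code: the buildHandlers helper and the minimal ResultHandler interface are hypothetical stand-ins for the handler registration in ResultManager that the issue says should be gated on writeRuntimeResults.

```java
// Hypothetical sketch of the proposed Pherf behavior: when runtime results
// are disabled (writeRuntimeResults == false), no ResultHandler is
// registered at all, so nothing can be written to the result file.
import java.util.ArrayList;
import java.util.List;

public class PherfResultSketch {
    // Minimal stand-in for Pherf's result handler abstraction.
    interface ResultHandler {
        void write(String result);
    }

    // Proposed fix: with the flag off, return an empty handler list instead
    // of handlers that still write.
    static List<ResultHandler> buildHandlers(boolean writeRuntimeResults, List<String> sink) {
        List<ResultHandler> handlers = new ArrayList<>();
        if (writeRuntimeResults) {
            handlers.add(sink::add); // stand-in for the real default handlers
        }
        return handlers;
    }

    public static void main(String[] args) {
        List<String> sink = new ArrayList<>();
        // Simulates MultiThreadedRunner emitting a result with the flag off:
        for (ResultHandler h : buildHandlers(false, sink)) {
            h.write("runtime result");
        }
        System.out.println("results written: " + sink.size());
    }
}
```

With no handlers registered, the runner's write loop simply has nothing to invoke, which matches the name disableRuntimeResult.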
[jira] [Updated] (PHOENIX-6159) Phoenix-pherf writes the result file even disableRuntimeResult flag is true
[ https://issues.apache.org/jira/browse/PHOENIX-6159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xinyi Yan updated PHOENIX-6159: --- Attachment: PHOENIX-6159.patch
[jira] [Updated] (PHOENIX-6153) Table Map Reduce job after a Snapshot based job fails with CorruptedSnapshotException
[ https://issues.apache.org/jira/browse/PHOENIX-6153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chinmay Kulkarni updated PHOENIX-6153: -- Fix Version/s: 5.1.0
> Table Map Reduce job after a Snapshot based job fails with CorruptedSnapshotException
> ---
>
> Key: PHOENIX-6153
> URL: https://issues.apache.org/jira/browse/PHOENIX-6153
> Project: Phoenix
> Issue Type: Bug
> Components: core
> Affects Versions: 4.15.0, 4.14.3, master
> Reporter: Saksham Gangwar
> Assignee: Saksham Gangwar
> Priority: Major
> Fix For: 5.1.0, 4.16.0
>
> Attachments: PHOENIX-6153.master.v1.patch, PHOENIX-6153.master.v2.patch, PHOENIX-6153.master.v3.patch, PHOENIX-6153.master.v4.patch, PHOENIX-6153.master.v5.patch
>
> Different MR job requests that reach [MapReduceParallelScanGrouper getRegionBoundaries|https://github.com/apache/phoenix/blob/f9e304754bad886344a856dd2565e3f24e345ed2/phoenix-core/src/main/java/org/apache/phoenix/iterate/MapReduceParallelScanGrouper.java#L65] currently rely on a configuration shared among jobs to figure out snapshot names.
> Example job sequence: the first two jobs run over snapshots and the third job over a regular table.
> Printing the hashcode of objects when entering [https://github.com/apache/phoenix/blob/f9e304754bad886344a856dd2565e3f24e345ed2/phoenix-core/src/main/java/org/apache/phoenix/iterate/MapReduceParallelScanGrouper.java#L65]:
> *Job 1:* (over a snapshot of *ABC_TABLE_1*, successful)
> context.getConnection(): 521093916
> ConnectionQueryServices: 1772519705
> *Configuration conf: 813285994*
> conf.get(PhoenixConfigurationUtil.SNAPSHOT_NAME_KEY): *ABC_TABLE_1*
>
> *Job 2:* (over a snapshot of *ABC_TABLE_2*, successful)
> context.getConnection(): 1928017473
> ConnectionQueryServices: 961279422
> *Configuration conf: 813285994*
> conf.get(PhoenixConfigurationUtil.SNAPSHOT_NAME_KEY): *ABC_TABLE_2*
>
> *Job 3:* (over the table *ABC_TABLE_3*, but fails with CorruptedSnapshotException although it has nothing to do with snapshots)
> context.getConnection(): 28889670
> ConnectionQueryServices: 424389847
> *Configuration: 813285994*
> conf.get(PhoenixConfigurationUtil.SNAPSHOT_NAME_KEY): *ABC_TABLE_2*
>
> The exception we get:
> [2020:08:18 20:56:17.409] [MigrationRetryPoller-Executor-1] [ERROR] [c.s.hgrate.mapreduce.MapReduceImpl] - Error submitting M/R job for Job 3
> java.lang.RuntimeException: org.apache.hadoop.hbase.snapshot.CorruptedSnapshotException: Couldn't read snapshot info from:hdfs://.../hbase/.hbase-snapshot/ABC_TABLE_2_1597687413477/.snapshotinfo
> at org.apache.phoenix.iterate.MapReduceParallelScanGrouper.getRegionBoundaries(MapReduceParallelScanGrouper.java:81) ~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT]
> at org.apache.phoenix.iterate.BaseResultIterators.getRegionBoundaries(BaseResultIterators.java:541) ~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT]
> at org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:893)
~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT] > at > org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:641) > > ~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT] > at > org.apache.phoenix.iterate.BaseResultIterators.(BaseResultIterators.java:511) > > ~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT] > at > org.apache.phoenix.iterate.ParallelIterators.(ParallelIterators.java:62) > > ~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT] > at org.apache.phoenix.execute.ScanPlan.newIterator(ScanPlan.java:278) > ~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT] > at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:367) > ~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT] > at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:218) > ~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT] > at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:213) > ~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT] > at > org.apache.phoenix.mapreduce.PhoenixInputFormat.setupParallelScansWithScanGrouper(PhoenixInputFormat.java:252) > > ~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT] > a
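The leak described above (job 2's snapshot name surviving in the shared configuration and corrupting job 3) can be modeled in a few lines. This is a hedged sketch: a plain Map stands in for Hadoop's Configuration, the key string is hypothetical (the real key lives in PhoenixConfigurationUtil.SNAPSHOT_NAME_KEY), and perJobConf is one possible fix, not the patch in the attachments.

```java
// Stand-alone sketch of the failure mode: three MR jobs sharing one
// configuration object, so the snapshot name set by job 2 leaks into job 3.
// A HashMap stands in for org.apache.hadoop.conf.Configuration.
import java.util.HashMap;
import java.util.Map;

public class SharedConfSketch {
    // Hypothetical stand-in for PhoenixConfigurationUtil.SNAPSHOT_NAME_KEY.
    public static final String SNAPSHOT_NAME_KEY = "phoenix.mr.snapshot.name";

    // One possible fix: give each job a private copy of the shared
    // configuration with any stale snapshot name removed.
    public static Map<String, String> perJobConf(Map<String, String> shared) {
        Map<String, String> copy = new HashMap<>(shared);
        copy.remove(SNAPSHOT_NAME_KEY);
        return copy;
    }

    public static void main(String[] args) {
        Map<String, String> sharedConf = new HashMap<>();

        // Job 2 runs over a snapshot and records its name in the shared conf.
        sharedConf.put(SNAPSHOT_NAME_KEY, "ABC_TABLE_2");

        // Job 3 targets a plain table but sees job 2's stale snapshot name,
        // so getRegionBoundaries takes the snapshot path and fails.
        System.out.println("leaked into job 3: " + sharedConf.get(SNAPSHOT_NAME_KEY));

        // With a per-job copy, the stale snapshot name is gone and job 3
        // falls back to the regular-table path.
        Map<String, String> job3Conf = perJobConf(sharedConf);
        System.out.println("job 3 snapshot name: " + job3Conf.get(SNAPSHOT_NAME_KEY));
    }
}
```

The key observation from the hashcodes in the report is that all three jobs print the same Configuration identity (813285994), which is what makes the per-job copy (or explicitly clearing the key between jobs) the natural remedy.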
[jira] [Updated] (PHOENIX-6153) Table Map Reduce job after a Snapshot based job fails with CorruptedSnapshotException
[ https://issues.apache.org/jira/browse/PHOENIX-6153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chinmay Kulkarni updated PHOENIX-6153: -- Affects Version/s: (was: 4.x) 4.15.0 > Table Map Reduce job after a Snapshot based job fails with > CorruptedSnapshotException > - > > Key: PHOENIX-6153 > URL: https://issues.apache.org/jira/browse/PHOENIX-6153 > Project: Phoenix > Issue Type: Bug > Components: core >Affects Versions: 4.15.0, 4.14.3, master >Reporter: Saksham Gangwar >Assignee: Saksham Gangwar >Priority: Major > Fix For: 4.16.0 > > Attachments: PHOENIX-6153.master.v1.patch, > PHOENIX-6153.master.v2.patch, PHOENIX-6153.master.v3.patch, > PHOENIX-6153.master.v4.patch, PHOENIX-6153.master.v5.patch > > > Different MR job requests which reach [MapReduceParallelScanGrouper > getRegionBoundaries|https://github.com/apache/phoenix/blob/f9e304754bad886344a856dd2565e3f24e345ed2/phoenix-core/src/main/java/org/apache/phoenix/iterate/MapReduceParallelScanGrouper.java#L65] > we currently make use of shared configuration among jobs to figure out > snapshot names. > Example jobs' sequence: first two jobs work over snapshot and the third job > over a regular table. 
> Prininting hashcode of objects when entering: > [https://github.com/apache/phoenix/blob/f9e304754bad886344a856dd2565e3f24e345ed2/phoenix-core/src/main/java/org/apache/phoenix/iterate/MapReduceParallelScanGrouper.java#L65] > *Job 1:* (over snapshot of *ABC_TABLE_1* and is successful) > context.getConnection(): 521093916 > ConnectionQueryServices: 1772519705 > *Configuration conf: 813285994* > conf.get(PhoenixConfigurationUtil.SNAPSHOT_NAME_KEY):*ABC_TABLE_1* > > *Job 2:* (over snapshot of *ABC_TABLE_2* and is successful) > context.getConnection(): 1928017473 > ConnectionQueryServices: 961279422 > *Configuration conf: 813285994* > conf.get(PhoenixConfigurationUtil.SNAPSHOT_NAME_KEY): *ABC_TABLE_2* > > *Job 3:* (over the table *ABC_TABLE_3* but fails with > CorruptedSnapshotException while it got nothing to do with snapshot) > context.getConnection(): 28889670 > ConnectionQueryServices: 424389847 > *Configuration: 813285994* > conf.get(PhoenixConfigurationUtil.SNAPSHOT_NAME_KEY): *ABC_TABLE_2* > > Exception which we get: > [2020:08:18 20:56:17.409] [MigrationRetryPoller-Executor-1] [ERROR] > [c.s.hgrate.mapreduce.MapReduceImpl] - Error submitting M/R job for Job 3 > java.lang.RuntimeException: > org.apache.hadoop.hbase.snapshot.CorruptedSnapshotException: Couldn't read > snapshot info > from:hdfs://.../hbase/.hbase-snapshot/ABC_TABLE_2_1597687413477/.snapshotinfo > at > org.apache.phoenix.iterate.MapReduceParallelScanGrouper.getRegionBoundaries(MapReduceParallelScanGrouper.java:81) > > ~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT] > at > org.apache.phoenix.iterate.BaseResultIterators.getRegionBoundaries(BaseResultIterators.java:541) > > ~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT] > at > org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:893) > > 
~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT] > at > org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:641) > > ~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT] > at > org.apache.phoenix.iterate.BaseResultIterators.(BaseResultIterators.java:511) > > ~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT] > at > org.apache.phoenix.iterate.ParallelIterators.(ParallelIterators.java:62) > > ~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT] > at org.apache.phoenix.execute.ScanPlan.newIterator(ScanPlan.java:278) > ~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT] > at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:367) > ~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT] > at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:218) > ~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT] > at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:213) > ~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT] > at > org.apache.phoenix.mapreduce.PhoenixInputFormat.setupParallelScansWithScanGrouper(PhoenixInputFormat.java:252) > > ~[phoenix-core-4.14.3-hbase-1.6-sfdc-1.0.9-SNAPSHOT.jar:4.14.3
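The hazard the hashcodes above demonstrate can be sketched with a minimal, hypothetical model: a plain Python dict stands in for Hadoop's shared Configuration object, the key name and class names are illustrative (not Phoenix's actual constants), and the `get_region_boundaries` branch mirrors the snapshot-vs-table decision in MapReduceParallelScanGrouper. A snapshot name written by one job leaks into the next job that reuses the same configuration.

```python
# Illustrative model only: a dict stands in for Hadoop's Configuration,
# and the key name below is made up for the sketch.
SNAPSHOT_NAME_KEY = "phoenix.mapreduce.snapshot.name"

class MapReduceJob:
    def __init__(self, conf, snapshot=None):
        self.conf = conf
        if snapshot is not None:
            conf[SNAPSHOT_NAME_KEY] = snapshot  # written into the SHARED conf

    def get_region_boundaries(self):
        # Mirrors the branch in MapReduceParallelScanGrouper: if a snapshot
        # name is present in the config, the snapshot code path is taken.
        snapshot = self.conf.get(SNAPSHOT_NAME_KEY)
        return ("snapshot", snapshot) if snapshot else ("table", None)

shared_conf = {}  # one Configuration object reused by all three jobs
job1 = MapReduceJob(shared_conf, snapshot="ABC_TABLE_1_SNAP")
job2 = MapReduceJob(shared_conf, snapshot="ABC_TABLE_2_SNAP")
job3 = MapReduceJob(shared_conf)  # plain table job, sets no snapshot

# Job 3 wrongly follows the snapshot path with Job 2's stale value:
stale = job3.get_region_boundaries()   # ("snapshot", "ABC_TABLE_2_SNAP")

# One possible remedy, in the spirit of the attached patches: scope the key
# per job, e.g. clear it for jobs that are not snapshot-based.
shared_conf.pop(SNAPSHOT_NAME_KEY, None)
fixed = job3.get_region_boundaries()   # ("table", None)
```

This models only the sharing bug, not the real Phoenix API; the point is that any per-job value stored in a configuration object shared across jobs must be cleared or re-scoped before the next job reads it.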
[jira] [Created] (PHOENIX-6163) Move CI to ASF Jenkins for connectors
Istvan Toth created PHOENIX-6163: Summary: Move CI to ASF Jenkins for connectors Key: PHOENIX-6163 URL: https://issues.apache.org/jira/browse/PHOENIX-6163 Project: Phoenix Issue Type: Sub-task Components: connectors Affects Versions: connectors-6.0.0 Reporter: Istvan Toth Assignee: Istvan Toth -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (PHOENIX-6147) Copy Github PR discussions to JIRA
[ https://issues.apache.org/jira/browse/PHOENIX-6147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Istvan Toth resolved PHOENIX-6147. -- Fix Version/s: 4.16.0 5.1.0 Resolution: Fixed It's working. If anyone thinks it's too verbose, then please discuss it on the dev list. > Copy Github PR discussions to JIRA > -- > > Key: PHOENIX-6147 > URL: https://issues.apache.org/jira/browse/PHOENIX-6147 > Project: Phoenix > Issue Type: Wish >Reporter: Istvan Toth >Assignee: Istvan Toth >Priority: Major > Fix For: 5.1.0, 4.16.0 > > > A lot of discussion on patches happens on GitHub PRs. > While the GitHub PR interface is superior to Jira for this purpose, it > means that we are missing a lot of information in the JIRA. > Try to set up a link, so that GitHub PR conversations are copied back to the > corresponding JIRA. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (PHOENIX-6162) Apply PHOENIX-5594 to the phoenix-queryserver repo
Istvan Toth created PHOENIX-6162: Summary: Apply PHOENIX-5594 to the phoenix-queryserver repo Key: PHOENIX-6162 URL: https://issues.apache.org/jira/browse/PHOENIX-6162 Project: Phoenix Issue Type: Bug Components: queryserver Affects Versions: queryserver-6.0.0 Reporter: Istvan Toth Assignee: Istvan Toth PHOENIX-5594 fixes an umask issue. However, the fix was applied to the queryserver.py in the core repo, which is not used, and has been removed since. Port the fix to the script in the actual queryserver repo. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (PHOENIX-5594) Different permission of phoenix-*-queryserver.log from umask
[ https://issues.apache.org/jira/browse/PHOENIX-5594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Istvan Toth reassigned PHOENIX-5594: Assignee: Toshihiro Suzuki (was: Istvan Toth) > Different permission of phoenix-*-queryserver.log from umask > > > Key: PHOENIX-5594 > URL: https://issues.apache.org/jira/browse/PHOENIX-5594 > Project: Phoenix > Issue Type: Bug >Reporter: Toshihiro Suzuki >Assignee: Toshihiro Suzuki >Priority: Major > Fix For: 4.15.0, 5.1.0 > > Time Spent: 20m > Remaining Estimate: 0h > > The permission of phoenix-*-queryserver.log is different from the umask we set. > For example, when we set umask to 077, the permission of > phoenix-*-queryserver.log should be 600, but it's 666: > {code} > $ umask 077 > $ /bin/queryserver.py start > starting Query Server, logging to /var/log/hbase/phoenix-hbase-queryserver.log > $ ll /var/log/hbase/phoenix* > -rw-rw-rw- 1 hbase hadoop 6181 Nov 27 13:52 phoenix-hbase-queryserver.log > -rw------- 1 hbase hadoop 1358 Nov 27 13:52 phoenix-hbase-queryserver.out > {code} > It looks like the permission of phoenix-*-queryserver.out is correct (600). > queryserver.py starts the QueryServer process as a subprocess, but it looks like > the umask is not inherited. I think we need to pass the umask on to the sub > process. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (PHOENIX-5594) Different permission of phoenix-*-queryserver.log from umask
[ https://issues.apache.org/jira/browse/PHOENIX-5594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Istvan Toth reassigned PHOENIX-5594: Assignee: Istvan Toth (was: Toshihiro Suzuki) > Different permission of phoenix-*-queryserver.log from umask > > > Key: PHOENIX-5594 > URL: https://issues.apache.org/jira/browse/PHOENIX-5594 > Project: Phoenix > Issue Type: Bug >Reporter: Toshihiro Suzuki >Assignee: Istvan Toth >Priority: Major > Fix For: 4.15.0, 5.1.0 > > Time Spent: 20m > Remaining Estimate: 0h > > The permission of phoenix-*-queryserver.log is different from the umask we set. > For example, when we set umask to 077, the permission of > phoenix-*-queryserver.log should be 600, but it's 666: > {code} > $ umask 077 > $ /bin/queryserver.py start > starting Query Server, logging to /var/log/hbase/phoenix-hbase-queryserver.log > $ ll /var/log/hbase/phoenix* > -rw-rw-rw- 1 hbase hadoop 6181 Nov 27 13:52 phoenix-hbase-queryserver.log > -rw------- 1 hbase hadoop 1358 Nov 27 13:52 phoenix-hbase-queryserver.out > {code} > It looks like the permission of phoenix-*-queryserver.out is correct (600). > queryserver.py starts the QueryServer process as a subprocess, but it looks like > the umask is not inherited. I think we need to pass the umask on to the sub > process. -- This message was sent by Atlassian Jira (v8.3.4#803005)
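The kind of fix PHOENIX-5594 describes for queryserver.py can be sketched as follows: read the launcher's current umask and explicitly re-apply it in the child process before it execs, so files the child creates (such as the .log) get the expected mode. This is a minimal, POSIX-only illustration under assumed names (`spawn_with_umask` is made up), not the actual patch.

```python
import os
import subprocess

def spawn_with_umask(cmd):
    """Start cmd as a subprocess, explicitly re-applying the caller's umask.

    Hypothetical helper for illustration. os.umask() can only be read by
    setting it, so set it to 0 and immediately restore the old value.
    """
    current = os.umask(0)
    os.umask(current)
    # preexec_fn runs in the child after fork() and before exec(), so the
    # child creates its files with the launcher's umask even if something
    # in between (e.g. a daemonizing wrapper) would otherwise reset it.
    return subprocess.Popen(cmd, preexec_fn=lambda: os.umask(current))
```

With umask 077, a file created by the spawned command then comes out mode 600, matching the behavior the reporter expects for phoenix-*-queryserver.log.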
[jira] [Created] (PHOENIX-6161) Update the building part of documentation for Phoenix Connectors
Richard Antal created PHOENIX-6161: -- Summary: Update the building part of documentation for Phoenix Connectors Key: PHOENIX-6161 URL: https://issues.apache.org/jira/browse/PHOENIX-6161 Project: Phoenix Issue Type: Task Reporter: Richard Antal Assignee: Richard Antal -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (PHOENIX-6113) Update website for building Phoenix
[ https://issues.apache.org/jira/browse/PHOENIX-6113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Istvan Toth reassigned PHOENIX-6113: Assignee: Istvan Toth (was: Richard Antal) > Update website for building Phoenix > --- > > Key: PHOENIX-6113 > URL: https://issues.apache.org/jira/browse/PHOENIX-6113 > Project: Phoenix > Issue Type: Task >Reporter: Richard Antal >Assignee: Istvan Toth >Priority: Major > Attachments: HOENIX-6113.docs.v1.patch > > -- This message was sent by Atlassian Jira (v8.3.4#803005)