[jira] [Updated] (PHOENIX-5032) add Apache Yetus to Phoenix

2020-09-09 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated PHOENIX-5032:
-
Attachment: phoenix-5032.4.x.v1.patch

> add Apache Yetus to Phoenix
> ---
>
> Key: PHOENIX-5032
> URL: https://issues.apache.org/jira/browse/PHOENIX-5032
> Project: Phoenix
>  Issue Type: Task
>Reporter: Artem Ervits
>Assignee: Istvan Toth
>Priority: Major
> Attachments: PHOENIX-5032.master.v1.patch, phoenix-5032.4.x.v1.patch
>
>
> Spoke with [~elserj]; Phoenix will benefit greatly from Yetus.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6010) Create phoenix-thirdparty, and consume guava through it

2020-09-09 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated PHOENIX-6010:
-
Attachment: PHOENIX-6010.master.v8.patch

> Create phoenix-thirdparty, and consume guava through it
> ---
>
> Key: PHOENIX-6010
> URL: https://issues.apache.org/jira/browse/PHOENIX-6010
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core, omid, tephra
>Affects Versions: 5.1.0, 4.16.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
> Attachments: PHOENIX-6010.master.v1.patch, 
> PHOENIX-6010.master.v2.patch, PHOENIX-6010.master.v3.patch, 
> PHOENIX-6010.master.v4.patch, PHOENIX-6010.master.v5.patch, 
> PHOENIX-6010.master.v6.patch, PHOENIX-6010.master.v7.patch, 
> PHOENIX-6010.master.v8.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> We have long-standing and well-documented problems with Guava, just like the 
> rest of the Hadoop components.
> Adopt the solution used by HBase:
>  * create phoenix-thirdparty repo
>  * create a pre-shaded phoenix-shaded-guava artifact in it
>  * Use the pre-shaded Guava in every phoenix component
> The advantages are well known, but to name a few:
>  * Phoenix will work with Hadoop 3.1.3+
>  * One less CVE in our direct dependencies
>  * No more conflicts with our consumers' Guava versions
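The HBase approach above boils down to publishing a pre-relocated Guava from the thirdparty repo. A hypothetical maven-shade-plugin fragment sketching the relocation; the shadedPattern is an assumption modeled on the hbase-thirdparty convention, not a confirmed value:

```xml
<!-- Hypothetical sketch of the phoenix-shaded-guava relocation (assumption:
     package name modeled on hbase-thirdparty, not taken from the patch). -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <relocations>
      <relocation>
        <pattern>com.google.common</pattern>
        <shadedPattern>org.apache.phoenix.thirdparty.com.google.common</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
</plugin>
```

Phoenix components would then import Guava only through the relocated package, so the consumer's own Guava version never conflicts.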



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5712) Got SYSCAT ILLEGAL_DATA exception after created tenant index on view

2020-09-09 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-5712:
---
Attachment: 5712-WIP.txt

> Got SYSCAT  ILLEGAL_DATA exception after created tenant index on view
> -
>
> Key: PHOENIX-5712
> URL: https://issues.apache.org/jira/browse/PHOENIX-5712
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0
>Reporter: Xinyi Yan
>Priority: Blocker
> Fix For: 5.1.0, 4.16.0
>
> Attachments: 5712-WIP.txt, 5712-test.txt, t.txt
>
>
> Repro:
> {code:sql}
> -- create a multi-tenant table on a global connection
> CREATE TABLE A (TENANT_ID CHAR(15) NOT NULL, ID CHAR(3) NOT NULL, NUM BIGINT
> CONSTRAINT PK PRIMARY KEY (TENANT_ID, ID)) MULTI_TENANT = true;
> -- create a view and an index on a tenant connection
> CREATE VIEW A_VIEW AS SELECT * FROM A;
> UPSERT INTO A_VIEW (ID, NUM) VALUES ('A', 1);
> CREATE INDEX A_VIEW_INDEX ON A_VIEW (NUM DESC) INCLUDE (ID);
> -- query data on a global connection
> SELECT * FROM SYSTEM.CATALOG;
> {code}
> {code:java}
> Error: ERROR 201 (22000): Illegal data. Expected length of at least 8 bytes, 
> but had 3 (state=22000,code=201)
> java.sql.SQLException: ERROR 201 (22000): Illegal data. Expected length of at 
> least 8 bytes, but had 3
> at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:559)
> at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:195)
> at 
> org.apache.phoenix.schema.types.PDataType.checkForSufficientLength(PDataType.java:290)
> at 
> org.apache.phoenix.schema.types.PLong$LongCodec.decodeLong(PLong.java:256)
> at org.apache.phoenix.schema.types.PLong.toObject(PLong.java:115)
> at org.apache.phoenix.schema.types.PLong.toObject(PLong.java:31)
> at 
> org.apache.phoenix.schema.types.PDataType.toObject(PDataType.java:1011)
> at 
> org.apache.phoenix.compile.ExpressionProjector.getValue(ExpressionProjector.java:75)
> at 
> org.apache.phoenix.jdbc.PhoenixResultSet.getObject(PhoenixResultSet.java:585)
> at sqlline.Rows$Row.(Rows.java:258)
> at sqlline.BufferedRows.nextList(BufferedRows.java:111)
> at sqlline.BufferedRows.(BufferedRows.java:52)
> at sqlline.SqlLine.print(SqlLine.java:1623)
> at sqlline.Commands.execute(Commands.java:982)
> at sqlline.Commands.sql(Commands.java:906)
> at sqlline.SqlLine.dispatch(SqlLine.java:740)
> at sqlline.SqlLine.begin(SqlLine.java:557)
> at sqlline.SqlLine.start(SqlLine.java:270)
> at sqlline.SqlLine.main(SqlLine.java:201)
> {code}
> After dropping the view, I was able to query SYSTEM.CATALOG again. I tested on the 
> 4.x-HBase-1.3 and master branches; both show the same 
> behavior.
>  
> cc [~kadir] [~gjacoby] [~swaroopa]
>  
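The "Expected length of at least 8 bytes" error comes from decoding a fixed-width BIGINT out of a shorter stored value (here a 3-byte CHAR(3) key column). A minimal Python sketch of that length check, assuming a simplified model of PLong.LongCodec rather than Phoenix's actual sort-order-aware codec:

```python
def decode_long(buf: bytes) -> int:
    # Fixed-width BIGINT decode needs at least 8 bytes, mirroring the
    # checkForSufficientLength guard in the stack trace above
    # (simplified illustration, not the real Phoenix codec).
    if len(buf) < 8:
        raise ValueError(
            f"Illegal data. Expected length of at least 8 bytes, but had {len(buf)}"
        )
    return int.from_bytes(buf[:8], "big", signed=True)
```

Feeding the codec a 3-byte value, as the corrupted SYSTEM.CATALOG row does, reproduces the "but had 3" message.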



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-09-09 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: PHOENIX-5860-4.x.patch

> Throw exception which region is closing or splitting when delete data
> -
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1, 4.x
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Fix For: 4.x
>
> Attachments: PHOENIX-5860-4.x.patch, 
> PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Delete operations currently run through the UngroupedAggregateRegionObserver class on 
> the server side, which checks the isRegionClosingOrSplitting flag. When the flag is 
> true, it throws new IOException("Temporarily unable to write from scan because region 
> is closing or splitting").
> When a region comes online, the Phoenix coprocessor initializes 
> isRegionClosingOrSplitting to false. Before a region split the flag is set to true, 
> but if the split fails, the rollback does not reset it to false. After that, every 
> write operation fails with "Temporarily unable to write from scan because region is 
> closing or splitting".
> We should therefore reset isRegionClosingOrSplitting to false in preRollBackSplit in 
> the UngroupedAggregateRegionObserver class.
> A simple test in which a data table split fails and rolls back successfully, yet 
> deletes keep failing:
>  # create a data table
>  # bulkload data into the table
>  # alter the hbase-server code so that the region split throws an exception and 
> then rolls back
>  # split the region with the hbase shell
>  # check the regionserver log to confirm that the split failed and the rollback 
> succeeded
>  # delete data via phoenix sqlline.py; every delete throws:
>  Caused by: java.io.IOException: Temporarily unable to write from scan 
> because region is closing or splitting Caused by: java.io.IOException: 
> Temporarily unable to write from scan because region is closing or splitting 
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:516)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2881)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3082)
>  ... 5 more
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108) 
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:548)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>  at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>  at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>  at 
> org.apache.phoenix.compile.DeleteCompiler$2.execute(DeleteCompiler.java:498) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:303) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295) 
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
>  at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:200)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2253)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2249)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:893) at 
> org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2249)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2243)
>  at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala
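The flag lifecycle described in the report can be modeled in a few lines. This is a Python sketch of the described behavior, not Phoenix's actual coprocessor code; the method names are illustrative:

```python
class RegionObserverModel:
    """Illustrative model of the isRegionClosingOrSplitting flag lifecycle."""

    def __init__(self):
        self.is_region_closing_or_splitting = False  # region comes online

    def pre_split(self):
        self.is_region_closing_or_splitting = True  # set before a split

    def pre_rollback_split(self):
        # The proposed fix: reset the flag when a failed split rolls back.
        self.is_region_closing_or_splitting = False

    def write(self):
        if self.is_region_closing_or_splitting:
            raise IOError("Temporarily unable to write from scan "
                          "because region is closing or splitting")
        return "ok"
```

Without the reset in pre_rollback_split, every write after a failed split keeps raising, which matches the behavior in the repro steps above.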

[jira] [Updated] (PHOENIX-5860) Throw exception which region is closing or splitting when delete data

2020-09-09 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-5860:
---
Attachment: (was: PHOENIX-5860.4.x.patch)

> Throw exception which region is closing or splitting when delete data
> -
>
> Key: PHOENIX-5860
> URL: https://issues.apache.org/jira/browse/PHOENIX-5860
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1, 4.x
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Blocker
> Fix For: 4.x
>
> Attachments: PHOENIX-5860-4.x.patch, 
> PHOENIX-5860.4.13.x-HBASE.1.3.x.002.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Delete operations currently run through the UngroupedAggregateRegionObserver class on 
> the server side, which checks the isRegionClosingOrSplitting flag. When the flag is 
> true, it throws new IOException("Temporarily unable to write from scan because region 
> is closing or splitting").
> When a region comes online, the Phoenix coprocessor initializes 
> isRegionClosingOrSplitting to false. Before a region split the flag is set to true, 
> but if the split fails, the rollback does not reset it to false. After that, every 
> write operation fails with "Temporarily unable to write from scan because region is 
> closing or splitting".
> We should therefore reset isRegionClosingOrSplitting to false in preRollBackSplit in 
> the UngroupedAggregateRegionObserver class.
> A simple test in which a data table split fails and rolls back successfully, yet 
> deletes keep failing:
>  # create a data table
>  # bulkload data into the table
>  # alter the hbase-server code so that the region split throws an exception and 
> then rolls back
>  # split the region with the hbase shell
>  # check the regionserver log to confirm that the split failed and the rollback 
> succeeded
>  # delete data via phoenix sqlline.py; every delete throws:
>  Caused by: java.io.IOException: Temporarily unable to write from scan 
> because region is closing or splitting Caused by: java.io.IOException: 
> Temporarily unable to write from scan because region is closing or splitting 
> at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:516)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
>  at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2881)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3082)
>  ... 5 more
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108) 
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:548)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>  at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>  at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>  at 
> org.apache.phoenix.compile.DeleteCompiler$2.execute(DeleteCompiler.java:498) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:303) 
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295) 
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53) at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
>  at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:200)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2253)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40$$anonfun$apply$19.apply(EcidProcessCommon.scala:2249)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:893) at 
> org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2249)
>  at 
> com.huawei.mds.apps.ecidRepeatProcess.EcidProcessCommon$$anonfun$40.apply(EcidProcessCommon.scala:2243)
>  at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.appl

[jira] [Updated] (PHOENIX-6119) UngroupedAggregateRegionObserver Malformed connection url Error thrown when using a zookeeper quorum

2020-09-09 Thread Kyle R Stehbens (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kyle R Stehbens updated PHOENIX-6119:
-
Description: 
When using Phoenix with an HBase instance configured with an HA ZooKeeper quorum 
like the following:

hbase.zookeeper.quorum='zk1:2181,zk2:2181,zk3:2181'

Phoenix throws exceptions when trying to collect statistics as follows:
{noformat}
2020-09-09 21:19:45,806 INFO 
[regionserver/regionserver1:16040-shortCompactions-0] util.QueryUtil: Creating 
connection with the jdbc url: 
jdbc:phoenix:zk1:2181,zk2:2181,zk3:2181:2181:/hbase;
 2020-09-09 21:19:45,808 WARN 
[regionserver/regionserver1:16040-shortCompactions-0] 
coprocessor.UngroupedAggregateRegionObserver: Unable to collect stats for 
test_namespace:test_table
 java.io.IOException: java.sql.SQLException: ERROR 102 (08001): Malformed 
connection url. :zk1:2181,zk2:2181,zk3:2181:2181:/hbase;
 at 
org.apache.phoenix.schema.stats.DefaultStatisticsCollector.init(DefaultStatisticsCollector.java:124)
 at 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver$5.run(UngroupedAggregateRegionObserver.java:1097)
 at 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver$5.run(UngroupedAggregateRegionObserver.java:1082)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
 at org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:517)
 at org.apache.hadoop.security.SecurityUtil.doAsLoginUser(SecurityUtil.java:498)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at org.apache.hadoop.hbase.util.Methods.call(Methods.java:40)
 at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:192)
 at 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.preCompact(UngroupedAggregateRegionObserver.java:1081)
 at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$8.call(RegionCoprocessorHost.java:656)
 at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$8.call(RegionCoprocessorHost.java:652)
 at 
org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithResult.callObserver(CoprocessorHost.java:600)
 at 
org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:636)
 at 
org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperationWithResult(CoprocessorHost.java:614)
 at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preCompact(RegionCoprocessorHost.java:650)
 at 
org.apache.hadoop.hbase.regionserver.compactions.Compactor.postCompactScannerOpen(Compactor.java:288)
 at 
org.apache.hadoop.hbase.regionserver.compactions.Compactor.compact(Compactor.java:317)
 at 
org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:65)
 at 
org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:126)
 at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1454)
 at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:2260)
 at 
org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner.doCompaction(CompactSplit.java:616)
 at 
org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner.run(CompactSplit.java:658)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)
 Caused by: java.sql.SQLException: ERROR 102 (08001): Malformed connection url. 
:zk1:2181,zk2:2181,zk3:2181:2181:/hbase;
 at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:570)
 at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:195)
 at 
org.apache.phoenix.jdbc.PhoenixEmbeddedDriver$ConnectionInfo.getMalFormedUrlException(PhoenixEmbeddedDriver.java:204)
 at 
org.apache.phoenix.jdbc.PhoenixEmbeddedDriver$ConnectionInfo.create(PhoenixEmbeddedDriver.java:262)
 at 
org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:232)
 at 
org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:144)
 at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
 at java.sql.DriverManager.getConnection(DriverManager.java:664)
 at java.sql.DriverManager.getConnection(DriverManager.java:208)
 at org.apache.phoenix.util.QueryUtil.getConnection(QueryUtil.java:422)
 at org.apache.phoenix.util.QueryUtil.getConnectionOnServer(QueryUtil.java:400)
 at org.apache.phoenix.util.Qu
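The malformed URL in the log falls out of how the server-side connection string is assembled. A Python sketch of the construction (an assumption about QueryUtil's behavior inferred from the log, for illustration only):

```python
# hbase.zookeeper.quorum already carries a port per host.
quorum = "zk1:2181,zk2:2181,zk3:2181"
port, root_node = "2181", "/hbase"

# Appending ":port:/rootNode" to an already port-qualified quorum produces
# the extra ":2181" seen in the log, which the URL parser then rejects.
url = f"jdbc:phoenix:{quorum}:{port}:{root_node};"
```

The resulting string matches the "Creating connection with the jdbc url" line in the log verbatim.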

[jira] [Updated] (PHOENIX-6119) UngroupedAggregateRegionObserver Malformed connection url Error thrown when using a zookeeper quorum

2020-09-09 Thread Kyle R Stehbens (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kyle R Stehbens updated PHOENIX-6119:
-
Description: 
When using Phoenix with an HBase instance configured with a ZooKeeper quorum 
like the following:

hbase.zookeeper.quorum='zk1:2181,zk2:2181,zk3:2181'

Phoenix throws exceptions when trying to collect statistics as follows:
{noformat}
2020-09-09 21:19:45,806 INFO 
[regionserver/regionserver1:16040-shortCompactions-0] util.QueryUtil: Creating 
connection with the jdbc url: 
jdbc:phoenix:zk1:2181,zk2:2181,zk3:2181:2181:/hbase;
 2020-09-09 21:19:45,808 WARN 
[regionserver/regionserver1:16040-shortCompactions-0] 
coprocessor.UngroupedAggregateRegionObserver: Unable to collect stats for 
test_namespace:test_table
 java.io.IOException: java.sql.SQLException: ERROR 102 (08001): Malformed 
connection url. :zk1:2181,zk2:2181,zk3:2181:2181:/hbase;
 at 
org.apache.phoenix.schema.stats.DefaultStatisticsCollector.init(DefaultStatisticsCollector.java:124)
 at 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver$5.run(UngroupedAggregateRegionObserver.java:1097)
 at 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver$5.run(UngroupedAggregateRegionObserver.java:1082)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
 at org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:517)
 at org.apache.hadoop.security.SecurityUtil.doAsLoginUser(SecurityUtil.java:498)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at org.apache.hadoop.hbase.util.Methods.call(Methods.java:40)
 at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:192)
 at 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.preCompact(UngroupedAggregateRegionObserver.java:1081)
 at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$8.call(RegionCoprocessorHost.java:656)
 at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$8.call(RegionCoprocessorHost.java:652)
 at 
org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithResult.callObserver(CoprocessorHost.java:600)
 at 
org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:636)
 at 
org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperationWithResult(CoprocessorHost.java:614)
 at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preCompact(RegionCoprocessorHost.java:650)
 at 
org.apache.hadoop.hbase.regionserver.compactions.Compactor.postCompactScannerOpen(Compactor.java:288)
 at 
org.apache.hadoop.hbase.regionserver.compactions.Compactor.compact(Compactor.java:317)
 at 
org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:65)
 at 
org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:126)
 at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1454)
 at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:2260)
 at 
org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner.doCompaction(CompactSplit.java:616)
 at 
org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner.run(CompactSplit.java:658)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)
 Caused by: java.sql.SQLException: ERROR 102 (08001): Malformed connection url. 
:zk1:2181,zk2:2181,zk3:2181:2181:/hbase;
 at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:570)
 at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:195)
 at 
org.apache.phoenix.jdbc.PhoenixEmbeddedDriver$ConnectionInfo.getMalFormedUrlException(PhoenixEmbeddedDriver.java:204)
 at 
org.apache.phoenix.jdbc.PhoenixEmbeddedDriver$ConnectionInfo.create(PhoenixEmbeddedDriver.java:262)
 at 
org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:232)
 at 
org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:144)
 at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
 at java.sql.DriverManager.getConnection(DriverManager.java:664)
 at java.sql.DriverManager.getConnection(DriverManager.java:208)
 at org.apache.phoenix.util.QueryUtil.getConnection(QueryUtil.java:422)
 at org.apache.phoenix.util.QueryUtil.getConnectionOnServer(QueryUtil.java:400)
 at org.apache.phoenix.util.Query

[jira] [Updated] (PHOENIX-6119) UngroupedAggregateRegionObserver Malformed connection url Error thrown when using a zookeeper quorum

2020-09-09 Thread Kyle R Stehbens (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kyle R Stehbens updated PHOENIX-6119:
-
Description: 
When using Phoenix with an HBase instance configured with a ZooKeeper quorum 
like the following:

hbase.zookeeper.quorum='zk1:2181,zk2:2181,zk3:2181'

Phoenix throws exceptions when trying to collect statistics as follows:

{noformat}

2020-09-09 21:19:45,806 INFO 
[regionserver/regionserver1:16040-shortCompactions-0] util.QueryUtil: Creating 
connection with the jdbc url: 
jdbc:phoenix:zk1:2181,zk2:2181,zk3:2181:2181:/hbase;
 2020-09-09 21:19:45,808 WARN 
[regionserver/regionserver1:16040-shortCompactions-0] 
coprocessor.UngroupedAggregateRegionObserver: Unable to collect stats for 
test_namespace:test_table
 java.io.IOException: java.sql.SQLException: ERROR 102 (08001): Malformed 
connection url. :zk1:2181,zk2:2181,zk3:2181:2181:/hbase;
 at 
org.apache.phoenix.schema.stats.DefaultStatisticsCollector.init(DefaultStatisticsCollector.java:124)
 at 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver$5.run(UngroupedAggregateRegionObserver.java:1097)
 at 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver$5.run(UngroupedAggregateRegionObserver.java:1082)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
 at org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:517)
 at org.apache.hadoop.security.SecurityUtil.doAsLoginUser(SecurityUtil.java:498)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at org.apache.hadoop.hbase.util.Methods.call(Methods.java:40)
 at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:192)
 at 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.preCompact(UngroupedAggregateRegionObserver.java:1081)
 at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$8.call(RegionCoprocessorHost.java:656)
 at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$8.call(RegionCoprocessorHost.java:652)
 at 
org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithResult.callObserver(CoprocessorHost.java:600)
 at 
org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:636)
 at 
org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperationWithResult(CoprocessorHost.java:614)
 at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preCompact(RegionCoprocessorHost.java:650)
 at 
org.apache.hadoop.hbase.regionserver.compactions.Compactor.postCompactScannerOpen(Compactor.java:288)
 at 
org.apache.hadoop.hbase.regionserver.compactions.Compactor.compact(Compactor.java:317)
 at 
org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:65)
 at 
org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:126)
 at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1454)
 at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:2260)
 at 
org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner.doCompaction(CompactSplit.java:616)
 at 
org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner.run(CompactSplit.java:658)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)
 Caused by: java.sql.SQLException: ERROR 102 (08001): Malformed connection url. 
:zk1:2181,zk2:2181,zk3:2181:2181:/hbase;
 at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:570)
 at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:195)
 at 
org.apache.phoenix.jdbc.PhoenixEmbeddedDriver$ConnectionInfo.getMalFormedUrlException(PhoenixEmbeddedDriver.java:204)
 at 
org.apache.phoenix.jdbc.PhoenixEmbeddedDriver$ConnectionInfo.create(PhoenixEmbeddedDriver.java:262)
 at 
org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:232)
 at 
org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:144)
 at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
 at java.sql.DriverManager.getConnection(DriverManager.java:664)
 at java.sql.DriverManager.getConnection(DriverManager.java:208)
 at org.apache.phoenix.util.QueryUtil.getConnection(QueryUtil.java:422)
 at org.apache.phoenix.util.QueryUtil.getConnectionOnServer(QueryUtil.java:400)
 at org.apache.phoenix.util.QueryUtil.

[jira] [Created] (PHOENIX-6119) UngroupedAggregateRegionObserver Malformed connection url Error thrown when using a zookeeper quorum

2020-09-09 Thread Kyle R Stehbens (Jira)
Kyle R Stehbens created PHOENIX-6119:


 Summary: UngroupedAggregateRegionObserver Malformed connection url 
Error thrown when using a zookeeper quorum
 Key: PHOENIX-6119
 URL: https://issues.apache.org/jira/browse/PHOENIX-6119
 Project: Phoenix
  Issue Type: Bug
  Components: core
Affects Versions: 5.1.0
Reporter: Kyle R Stehbens
 Fix For: 5.1.0


When using Phoenix with an HBase instance configured with a ZooKeeper quorum 
like the following:

hbase.zookeeper.quorum='zk1:2181,zk2:2181,zk3:2181'

Phoenix throws exceptions when trying to collect statistics as follows:

{noformat}

2020-09-09 21:19:45,806 INFO 
[regionserver/regionserver1:16040-shortCompactions-0] util.QueryUtil: Creating 
connection with the jdbc url: 
jdbc:phoenix:zk1:2181,zk2:2181,zk3:2181:2181:/hbase;
2020-09-09 21:19:45,808 WARN 
[regionserver/regionserver1:16040-shortCompactions-0] 
coprocessor.UngroupedAggregateRegionObserver: Unable to collect stats for 
starlink:terminal_slot_metrics
java.io.IOException: java.sql.SQLException: ERROR 102 (08001): Malformed 
connection url. :zk1:2181,zk2:2181,zk3:2181:2181:/hbase;
 at 
org.apache.phoenix.schema.stats.DefaultStatisticsCollector.init(DefaultStatisticsCollector.java:124)
 at 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver$5.run(UngroupedAggregateRegionObserver.java:1097)
 at 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver$5.run(UngroupedAggregateRegionObserver.java:1082)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
 at org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:517)
 at org.apache.hadoop.security.SecurityUtil.doAsLoginUser(SecurityUtil.java:498)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at org.apache.hadoop.hbase.util.Methods.call(Methods.java:40)
 at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:192)
 at 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.preCompact(UngroupedAggregateRegionObserver.java:1081)
 at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$8.call(RegionCoprocessorHost.java:656)
 at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$8.call(RegionCoprocessorHost.java:652)
 at 
org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithResult.callObserver(CoprocessorHost.java:600)
 at 
org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:636)
 at 
org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperationWithResult(CoprocessorHost.java:614)
 at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preCompact(RegionCoprocessorHost.java:650)
 at 
org.apache.hadoop.hbase.regionserver.compactions.Compactor.postCompactScannerOpen(Compactor.java:288)
 at 
org.apache.hadoop.hbase.regionserver.compactions.Compactor.compact(Compactor.java:317)
 at 
org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:65)
 at 
org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:126)
 at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1454)
 at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:2260)
 at 
org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner.doCompaction(CompactSplit.java:616)
 at 
org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner.run(CompactSplit.java:658)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)
Caused by: java.sql.SQLException: ERROR 102 (08001): Malformed connection url. 
:zk1:2181,zk2:2181,zk3:2181:2181:/hbase;
 at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:570)
 at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:195)
 at 
org.apache.phoenix.jdbc.PhoenixEmbeddedDriver$ConnectionInfo.getMalFormedUrlException(PhoenixEmbeddedDriver.java:204)
 at 
org.apache.phoenix.jdbc.PhoenixEmbeddedDriver$ConnectionInfo.create(PhoenixEmbeddedDriver.java:262)
 at 
org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:232)
 at 
org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:144)
 at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
 at java.sql.DriverManager.getConnection(DriverManag
```
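The log line above shows the client port being appended even though every quorum host already carries one. A minimal Python sketch of that construction (hypothetical helper names, not Phoenix's actual code), next to a port-aware variant:

```python
def build_jdbc_url(quorum, port=2181, znode="/hbase"):
    """Naive construction: always append the client port (reproduces the bug)."""
    return "jdbc:phoenix:%s:%d:%s;" % (quorum, port, znode)

def build_jdbc_url_port_aware(quorum, port=2181, znode="/hbase"):
    """Skip the extra port when every quorum host already specifies its own."""
    if all(":" in host for host in quorum.split(",")):
        return "jdbc:phoenix:%s:%s;" % (quorum, znode)
    return "jdbc:phoenix:%s:%d:%s;" % (quorum, port, znode)

quorum = "zk1:2181,zk2:2181,zk3:2181"
print(build_jdbc_url(quorum))             # jdbc:phoenix:zk1:2181,zk2:2181,zk3:2181:2181:/hbase;
print(build_jdbc_url_port_aware(quorum))  # jdbc:phoenix:zk1:2181,zk2:2181,zk3:2181:/hbase;
```

The first output is exactly the malformed URL from the log above; the second is what a quorum-aware builder would produce.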

[jira] [Updated] (PHOENIX-5171) SkipScan incorrectly filters composite primary key which the key range contains all values

2020-09-09 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-5171:
---
Attachment: 5171-master-v4.patch

> SkipScan incorrectly filters composite primary key which the key range 
> contains all values
> --
>
> Key: PHOENIX-5171
> URL: https://issues.apache.org/jira/browse/PHOENIX-5171
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Jaanai Zhang
>Assignee: Jaanai Zhang
>Priority: Blocker
> Fix For: 5.1.0, 4.16.0
>
> Attachments: 5171-master-v3.patch, 5171-master-v4.patch, 
> PHOENIX-5171-master-v2.patch, PHOENIX-5171-master.patch
>
>
> Running the below SQL:
> {code:sql}
> create table if not exists aiolos(
> vdate varchar,
> tab varchar,
> dev tinyint not null,
> app varchar,
> target varchar,
> channel varchar,
> one varchar,
> two varchar,
> count1 integer,
> count2 integer,
> CONSTRAINT PK PRIMARY KEY (vdate,tab,dev,app,target,channel,one,two));
> upsert into aiolos 
> values('2018-02-14','channel_agg',2,null,null,'A004',null,null,2,2);
> upsert into aiolos 
> values('2018-02-14','channel_agg',2,null,null,null,null,null,2,2);
> SELECT * FROM aiolos WHERE dev = 2 AND vdate BETWEEN '2018-02-10' AND 
> '2019-02-19' AND tab = 'channel_agg' and channel='A004';
> {code}
> Throws exception:
> {code:java}
> Caused by: java.lang.IllegalStateException: The next hint must come after 
> previous hint 
> (prev=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
>  
> next=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
>  
> kv=2018-02-14\x00channel_agg\x00\x82/0:\x00\x00\x00\x00/1550642992223/Put/vlen=4/seqid=5445463)
>   at 
> org.apache.phoenix.filter.SkipScanFilter.setNextCellHint(SkipScanFilter.java:171)
>   at 
> org.apache.phoenix.filter.SkipScanFilter.filterKeyValue(SkipScanFilter.java:145)
>   at 
> org.apache.hadoop.hbase.filter.FilterList.filterKeyValue(FilterList.java:264)
>   at 
> org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:418)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:557)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:6308)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6459)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6246)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6232)
>   at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)
>   ... 8 more
> {code}
> This is caused by an incorrect next cell hint: in the ScanUtil.setKey method 
> we skip the remaining slots whose key ranges contain all values 
> (EVERYTHING_RANGE). The next cell hint in this case is 
> _kv=2018-02-14\x00channel_agg\x00\x82/0:\x00\x00\x00\x00_, but it should be 
> _kv=2018-02-14\x00channel_agg\x00\x82\x00\x00A004_.
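The invariant the exception reports can be sketched in a few lines of Python (simplified byte keys; set_next_cell_hint is a hypothetical stand-in for the check in SkipScanFilter, not Phoenix's code):

```python
def set_next_cell_hint(prev_hint, next_hint):
    # HBase requires each new seek hint to sort strictly after the previous one.
    if prev_hint is not None and next_hint <= prev_hint:
        raise ValueError("The next hint must come after previous hint")
    return next_hint

prev = b"2018-02-14\x00channel_agg\x00\x82"
# Buggy hint: trailing EVERYTHING_RANGE slots were skipped, so the hint
# equals the current row key and does not advance the scan.
bad = b"2018-02-14\x00channel_agg\x00\x82"
# Correct hint: keep filling slots past the EVERYTHING_RANGE ones,
# down to the channel='A004' bound.
good = b"2018-02-14\x00channel_agg\x00\x82\x00\x00A004"

assert set_next_cell_hint(prev, good) == good  # strictly later: accepted
try:
    set_next_cell_hint(prev, bad)
except ValueError:
    print("reproduced: next hint does not advance")
```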



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6089) Additional relocations for the 5.1.0 client

2020-09-09 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6089:
---
Attachment: 6089-master.txt

> Additional relocations for the 5.1.0 client
> ---
>
> Key: PHOENIX-6089
> URL: https://issues.apache.org/jira/browse/PHOENIX-6089
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Priority: Major
> Fix For: 5.1.0
>
> Attachments: 6089-master.txt, 6089.txt
>
>
> I just updated the Phoenix connector in Presto locally to work with Phoenix 
> 5.1.x.
> Among other things, I relocated a bunch more classes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5171) SkipScan incorrectly filters composite primary key which the key range contains all values

2020-09-09 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-5171:
---
Attachment: 5171-master-v3.patch

> SkipScan incorrectly filters composite primary key which the key range 
> contains all values
> --
>
> Key: PHOENIX-5171
> URL: https://issues.apache.org/jira/browse/PHOENIX-5171
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Jaanai Zhang
>Assignee: Jaanai Zhang
>Priority: Blocker
> Fix For: 5.1.0, 4.16.0
>
> Attachments: 5171-master-v3.patch, PHOENIX-5171-master-v2.patch, 
> PHOENIX-5171-master.patch
>
>
> Running the below SQL:
> {code:sql}
> create table if not exists aiolos(
> vdate varchar,
> tab varchar,
> dev tinyint not null,
> app varchar,
> target varchar,
> channel varchar,
> one varchar,
> two varchar,
> count1 integer,
> count2 integer,
> CONSTRAINT PK PRIMARY KEY (vdate,tab,dev,app,target,channel,one,two));
> upsert into aiolos 
> values('2018-02-14','channel_agg',2,null,null,'A004',null,null,2,2);
> upsert into aiolos 
> values('2018-02-14','channel_agg',2,null,null,null,null,null,2,2);
> SELECT * FROM aiolos WHERE dev = 2 AND vdate BETWEEN '2018-02-10' AND 
> '2019-02-19' AND tab = 'channel_agg' and channel='A004';
> {code}
> Throws exception:
> {code:java}
> Caused by: java.lang.IllegalStateException: The next hint must come after 
> previous hint 
> (prev=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
>  
> next=2018-02-14\x00channel_agg\x00\x82//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
>  
> kv=2018-02-14\x00channel_agg\x00\x82/0:\x00\x00\x00\x00/1550642992223/Put/vlen=4/seqid=5445463)
>   at 
> org.apache.phoenix.filter.SkipScanFilter.setNextCellHint(SkipScanFilter.java:171)
>   at 
> org.apache.phoenix.filter.SkipScanFilter.filterKeyValue(SkipScanFilter.java:145)
>   at 
> org.apache.hadoop.hbase.filter.FilterList.filterKeyValue(FilterList.java:264)
>   at 
> org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:418)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:557)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:147)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:6308)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6459)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6246)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6232)
>   at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:175)
>   ... 8 more
> {code}
> This is caused by an incorrect next cell hint: in the ScanUtil.setKey method 
> we skip the remaining slots whose key ranges contain all values 
> (EVERYTHING_RANGE). The next cell hint in this case is 
> _kv=2018-02-14\x00channel_agg\x00\x82/0:\x00\x00\x00\x00_, but it should be 
> _kv=2018-02-14\x00channel_agg\x00\x82\x00\x00A004_.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5881) Port MaxLookbackAge logic to 5.x

2020-09-09 Thread Josh Elser (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-5881:

Attachment: PHOENIX-5881.v4.patch

> Port MaxLookbackAge logic to 5.x
> 
>
> Key: PHOENIX-5881
> URL: https://issues.apache.org/jira/browse/PHOENIX-5881
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Blocker
> Fix For: 5.1.0
>
> Attachments: PHOENIX-5881.v1.patch, PHOENIX-5881.v2.patch, 
> PHOENIX-5881.v3.patch, PHOENIX-5881.v4.patch
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> PHOENIX-5645 wasn't included in the master (5.x) branch because an HBase 2.x 
> change prevented the logic from being useful in the case of deletes, since 
> HBase 2.x no longer allows us to show deleted cells on an SCN query before 
> the point of deletion. Unfortunately, PHOENIX-5645 wound up requiring a lot 
> of follow-up work in the IndexTool and IndexScrutinyTool to deal with its 
> implications, and because of that, the 4.x and 5.x codebases around indexes 
> have diverged a good bit. 
> This work item is to get them back in sync, even though the behavior in the 
> face of deletes will be somewhat different, and so most likely some tests 
> will have to be changed or Ignored. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[VOTE] Release of phoenixdb 1.0.0 RC0

2020-09-09 Thread Istvan Toth
Hello Everyone,

This is a call for a vote on phoenixdb 1.0.0 RC0.

PhoenixDB is a native Python driver for accessing Phoenix via the Phoenix
Query Server.

This is the first version released by the Apache Phoenix project, and it
contains the following improvements over the previous 0.7 release by the
original author:

- Replaced bundled requests_kerberos with request_gssapi library
- Use default SPNEGO Auth settings from request_gssapi
- Refactored authentication code
- Added support for specifying server certificate
- Added support for BASIC and DIGEST authentication
- Fixed HTTP error parsing
- Added transaction support
- Added list support
- Rewritten type handling
- Refactored test suite
- Removed shell example, as it was python2 only
- Updated documentation
- Added SQLAlchemy dialect
- Implemented Avatica Metadata API
- Misc fixes
- Licensing cleanup

The source release consists of the contents of the python-phoenixdb
directory of the phoenix-queryserver repository.

The source tarball, including signatures, digests, etc can be found at:

https://dist.apache.org/repos/dist/dev/phoenix/python-phoenixdb-1.0.0-rc0/src/python-phoenixdb-1.0.0-src.tar.gz
https://dist.apache.org/repos/dist/dev/phoenix/python-phoenixdb-1.0.0-rc0/src/python-phoenixdb-1.0.0-src.tar.gz.asc
https://dist.apache.org/repos/dist/dev/phoenix/python-phoenixdb-1.0.0-rc0/src/python-phoenixdb-1.0.0-src.tar.gz.sha256
https://dist.apache.org/repos/dist/dev/phoenix/python-phoenixdb-1.0.0-rc0/src/python-phoenixdb-1.0.0-src.tar.gz.sha512

Artifacts are signed with my "CODE SIGNING KEY":
825203A70405BC83AECF5F7D97351C1B794433C7

KEYS file available here:
https://dist.apache.org/repos/dist/dev/phoenix/KEYS
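The .sha256 digests above can be checked with any SHA-256 tool; as a minimal sketch, a Python helper (verify_sha256 is an illustrative name; the digest file is assumed to start with the hex digest, as Apache digest files do):

```python
import hashlib

def verify_sha256(artifact_path, digest_path):
    """Return True when the artifact's SHA-256 matches the recorded hex digest."""
    h = hashlib.sha256()
    with open(artifact_path, "rb") as f:
        # Hash in chunks so large tarballs do not need to fit in memory.
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    with open(digest_path) as f:
        expected = f.read().split()[0].lower()
    return h.hexdigest() == expected

# Usage, after downloading the artifacts listed above:
# verify_sha256("python-phoenixdb-1.0.0-src.tar.gz",
#               "python-phoenixdb-1.0.0-src.tar.gz.sha256")
```

Signature verification against the KEYS file still requires gpg; this only covers the digests.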


The hash and tag to be voted upon:
https://gitbox.apache.org/repos/asf?p=phoenix-queryserver.git;a=commit;h=3360154858e27cabe258dfb33b37ec31ed3bd210
https://gitbox.apache.org/repos/asf?p=phoenix-queryserver.git;a=tag;h=refs/tags/python-phoenixdb-1.0.0-rc0

Vote will be open for at least 72 hours. Please vote:

[ ] +1 approve
[ ] +0 no opinion
[ ] -1 disapprove (and reason why)

Thanks,
The Apache Phoenix Team