[jira] [Commented] (PHOENIX-4716) ParameterizedTransactionIT is failing in 0.98 branch

2018-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457349#comment-16457349
 ] 

Hudson commented on PHOENIX-4716:
-

FAILURE: Integrated in Jenkins build Phoenix-4.x-HBase-1.3 #110 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/110/])
PHOENIX-4716 ParameterizedTransactionIT is failing in 0.98 branch (jtaylor: rev 
c08d8e6b74d8ee2a5006e05296e301587b3900de)
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/tx/ParameterizedTransactionIT.java


> ParameterizedTransactionIT is failing in 0.98 branch
> 
>
> Key: PHOENIX-4716
> URL: https://issues.apache.org/jira/browse/PHOENIX-4716
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4716.patch
>
>
> ParameterizedTransactionIT.testNonTxToTxTable is failing after commit for 
> PHOENIX-4278: 
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=514f576c1528df43654362bd1519f4a2082ab80f
> Error is:
> {code}
> [ERROR] 
> testNonTxToTxTable[TransactionIT_mutable=true,columnEncoded=true](org.apache.phoenix.tx.ParameterizedTransactionIT)
>   Time elapsed: 6.829 s  <<< ERROR!
> org.apache.phoenix.schema.IllegalDataException: 
> java.net.SocketTimeoutException: callTimeout=120, callDuration=9000101: 
> row '�' on table 'T59' at 
> region=T59,,1524870980142.01ac7dce02ae7a04d7107eb7d5f51edf., 
> hostname=jtaylor-wsl2,33800,1524870748756, seqNum=1
>   at 
> org.apache.phoenix.tx.ParameterizedTransactionIT.testNonTxToTxTable(ParameterizedTransactionIT.java:288)
> Caused by: java.net.SocketTimeoutException: callTimeout=120, 
> callDuration=9000101: row '�' on table 'T59' at 
> region=T59,,1524870980142.01ac7dce02ae7a04d7107eb7d5f51edf., 
> hostname=jtaylor-wsl2,33800,1524870748756, seqNum=1
>   at 
> org.apache.phoenix.tx.ParameterizedTransactionIT.testNonTxToTxTable(ParameterizedTransactionIT.java:288)
> Caused by: org.apache.hadoop.hbase.NotServingRegionException: 
> org.apache.hadoop.hbase.NotServingRegionException: Region 
> T59,,1524870980142.01ac7dce02ae7a04d7107eb7d5f51edf. is not online on 
> jtaylor-wsl2,33800,1524870748756
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2860)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:4528)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3246)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32492)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2195)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
>   at java.lang.Thread.run(Thread.java:748)
>   at 
> org.apache.phoenix.tx.ParameterizedTransactionIT.testNonTxToTxTable(ParameterizedTransactionIT.java:288)
> Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException: 
> org.apache.hadoop.hbase.NotServingRegionException: Region 
> T59,,1524870980142.01ac7dce02ae7a04d7107eb7d5f51edf. is not online on 
> jtaylor-wsl2,33800,1524870748756
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2860)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:4528)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3246)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32492)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2195)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
>   at java.lang.Thread.run(Thread.java:748)
>   at 
> org.apache.phoenix.tx.ParameterizedTransactionIT.testNonTxToTxTable(ParameterizedTransactionIT.java:288)
> {code}
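The chain above (SocketTimeoutException wrapping NotServingRegionException) is the client exhausting its retries against a region that is briefly offline after the table's metadata changed. As a hypothetical sketch (not Phoenix or HBase client code; `withRetries` and `RetrySketch` are invented names), the retry-with-backoff pattern involved looks roughly like:

```java
import java.util.concurrent.Callable;

// Hypothetical sketch: a bounded retry loop of the kind a client runs when a
// region is transiently unavailable after a metadata change, such as switching
// a table from non-transactional to transactional.
public class RetrySketch {
    // Retries the action up to maxAttempts times, backing off linearly between tries;
    // rethrows the last failure once attempts are exhausted.
    static <T> T withRetries(Callable<T> action, int maxAttempts, long backoffMillis)
            throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.call();
            } catch (Exception e) {
                last = e;
                Thread.sleep(backoffMillis * attempt); // linear backoff
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        // Simulated flaky call: fails twice (region "not online"), then succeeds.
        int[] calls = {0};
        String result = withRetries(() -> {
            if (++calls[0] < 3) {
                throw new IllegalStateException("region not online");
            }
            return "ok";
        }, 5, 1L);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

When the region never comes back within the retry budget, the loop surfaces the timeout, which is what the test reports here.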



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4716) ParameterizedTransactionIT is failing in 0.98 branch

2018-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457317#comment-16457317
 ] 

Hudson commented on PHOENIX-4716:
-

FAILURE: Integrated in Jenkins build Phoenix-4.x-HBase-0.98 #1873 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/1873/])
PHOENIX-4716 ParameterizedTransactionIT is failing in 0.98 branch (jtaylor: rev 
6a893aeb9bdfe7b355a3ab3b6e7ac17c9e1ce082)
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/tx/ParameterizedTransactionIT.java




[jira] [Resolved] (PHOENIX-4716) ParameterizedTransactionIT is failing in 0.98 branch

2018-04-27 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-4716.
---
   Resolution: Fixed
Fix Version/s: 5.0.0
   4.14.0



[jira] [Commented] (PHOENIX-4715) PartialIndexRebuilderIT tests fail after switching master to HBase 1.4

2018-04-27 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457276#comment-16457276
 ] 

James Taylor commented on PHOENIX-4715:
---

[~sergey.soldatov] - would you have any spare cycles to try to figure this out? 
Or [~rajeshbabu]?

> PartialIndexRebuilderIT tests fail after switching master to HBase 1.4
> --
>
> Key: PHOENIX-4715
> URL: https://issues.apache.org/jira/browse/PHOENIX-4715
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Thomas D'Silva
>Priority: Major
>
> I think the 3 test failures in PartialIndexRebuilderIT started happening 
> after we switched master to HBase 1.4 as part of PHOENIX-4076.
> Maybe [~lhofhansl] or [~apurtell] might have some insight.
> {code:java}
> [ERROR] Failures: 
> [ERROR] PartialIndexRebuilderIT.testConcurrentUpsertsWithRebuild:230 Expected 
> equality for V1, but null!=11 
> [ERROR] PartialIndexRebuilderIT.testDeleteAndUpsertAfterFailure:347 Expected 
> equality for V2, but null!=1 
> [ERROR] PartialIndexRebuilderIT.testWriteWhileRebuilding:396 Expected 
> equality for V2, but null!=2 
> {code}
> testDeleteAndUpsertAfterFailure and testWriteWhileRebuilding pass for me 
> locally just before PHOENIX-4076 was committed. 
> testConcurrentUpsertsWithRebuild fails with the following exception at the 
> commit before PHOENIX-4076.
> {code:java}
> 2018-04-27 16:14:48,049 ERROR 
> [RpcServer.FifoWFPBQ.default.handler=1,queue=0,port=26069] 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver(1089): 
> IOException during rebuilding: 
> org.apache.hadoop.hbase.exceptions.TimeoutIOException: Timed out waiting for 
> lock for row: 80 00 00 01 80 00 00 00
>   at 
> org.apache.phoenix.hbase.index.LockManager.lockRow(LockManager.java:96)
>   at 
> org.apache.phoenix.hbase.index.Indexer.preBatchMutateWithExceptions(Indexer.java:421)
>   at 
> org.apache.phoenix.hbase.index.Indexer.preBatchMutate(Indexer.java:370)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$35.call(RegionCoprocessorHost.java:1007)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1749)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1705)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preBatchMutate(RegionCoprocessorHost.java:1003)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3190)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2976)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2918)
>   at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.rebuildIndices(UngroupedAggregateRegionObserver.java:1074)
>   at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:369)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2629)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2833)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34950)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
> 2018-04-27 16:14:48,051 DEBUG 
> [RpcServer.FifoWFPBQ.default.handler=1,queue=0,port=26069] 
> org.apache.hadoop.hbase.ipc.CallRunner(126): 
> RpcServer.FifoWFPBQ.default.handler=1,queue=0,port=26069: callId: 1941 
> service: ClientService methodName: Scan size: 40 connection: 127.0.0.1:14017
> org.apache.hadoop.hbase.UnknownScannerException: Throwing 
> UnknownScannerException to reset the client scanner state for clients older 
> than 1.3.
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2893)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34950)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)
> {code}
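The TimeoutIOException in the trace above comes from a per-row lock acquisition that expires while concurrent upserts and the index rebuild contend for the same row. As a hypothetical sketch (not the Phoenix LockManager; `RowLockSketch` and its methods are invented names), the tryLock-with-timeout pattern behind that failure can be illustrated with a plain ReentrantLock:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch: acquiring a per-row lock with a timeout. A holder that
// keeps the lock too long causes other writers' acquisitions to time out.
public class RowLockSketch {
    private final ReentrantLock rowLock = new ReentrantLock();

    // Returns true if the row lock was acquired within the timeout.
    boolean lockRow(long timeoutMillis) throws InterruptedException {
        return rowLock.tryLock(timeoutMillis, TimeUnit.MILLISECONDS);
    }

    void unlockRow() {
        rowLock.unlock();
    }

    public static void main(String[] args) throws InterruptedException {
        RowLockSketch lock = new RowLockSketch();
        boolean first = lock.lockRow(10); // uncontended: acquires immediately

        // A second thread contends for the same row while it is held and times out.
        final boolean[] second = {true};
        Thread writer = new Thread(() -> {
            try {
                second[0] = lock.lockRow(50);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        writer.start();
        writer.join();
        lock.unlockRow();
        System.out.println("first=" + first + " second=" + second[0]);
    }
}
```

In the real coprocessor path the timed-out acquisition is wrapped in a TimeoutIOException and aborts the rebuild batch, matching the log line "Timed out waiting for lock for row".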

[jira] [Updated] (PHOENIX-4716) ParameterizedTransactionIT is failing in 0.98 branch

2018-04-27 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4716:
--
Attachment: PHOENIX-4716.patch



[jira] [Updated] (PHOENIX-4716) ParameterizedTransactionIT is failing in 0.98 branch

2018-04-27 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4716:
--
Issue Type: Test  (was: Bug)



[jira] [Commented] (PHOENIX-4715) PartialIndexRebuilderIT tests fail after switching master to HBase 1.4

2018-04-27 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457270#comment-16457270
 ] 

Andrew Purtell commented on PHOENIX-4715:
-

I was just testing some internal branches and PartialIndexRebuilderIT was the 
only one to fail under 1.4 but pass on 1.3. I don't have any particular insight 
into why.


[jira] [Created] (PHOENIX-4716) ParameterizedTransactionIT is failing in 0.98 branch

2018-04-27 Thread James Taylor (JIRA)
James Taylor created PHOENIX-4716:
-

 Summary: ParameterizedTransactionIT is failing in 0.98 branch
 Key: PHOENIX-4716
 URL: https://issues.apache.org/jira/browse/PHOENIX-4716
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: James Taylor


ParameterizedTransactionIT.testNonTxToTxTable is failing after commit for 
PHOENIX-4278: 
https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=514f576c1528df43654362bd1519f4a2082ab80f

at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:748)

at 
org.apache.phoenix.tx.ParameterizedTransactionIT.testNonTxToTxTable(ParameterizedTransactionIT.java:288)
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4715) PartialIndexRebuilderIT tests fail after switching master to HBase 1.4

2018-04-27 Thread Thomas D'Silva (JIRA)
Thomas D'Silva created PHOENIX-4715:
---

 Summary: PartialIndexRebuilderIT tests fail after switching master 
to HBase 1.4
 Key: PHOENIX-4715
 URL: https://issues.apache.org/jira/browse/PHOENIX-4715
 Project: Phoenix
  Issue Type: Bug
Reporter: Thomas D'Silva


I think the 3 test failures in PartialIndexRebuilderIT started happening after 
we switched master to HBase 1.4 as part of PHOENIX-4076. 

Perhaps [~lhofhansl] or [~apurtell] has some insight.

{code}

[ERROR] Failures: 
[ERROR] PartialIndexRebuilderIT.testConcurrentUpsertsWithRebuild:230 Expected 
equality for V1, but null!=11 
[ERROR] PartialIndexRebuilderIT.testDeleteAndUpsertAfterFailure:347 Expected 
equality for V2, but null!=1 
[ERROR] PartialIndexRebuilderIT.testWriteWhileRebuilding:396 Expected equality 
for V2, but null!=2 

{code}

testDeleteAndUpsertAfterFailure and testWriteWhileRebuilding pass for me 
locally just before PHOENIX-4076 was committed. 
testConcurrentUpsertsWithRebuild fails with the following exception at the 
commit before PHOENIX-4076.

{code}
2018-04-27 16:14:48,049 ERROR 
[RpcServer.FifoWFPBQ.default.handler=1,queue=0,port=26069] 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver(1089): 
IOException during rebuilding: 
org.apache.hadoop.hbase.exceptions.TimeoutIOException: Timed out waiting for 
lock for row: 80 00 00 01 80 00 00 00
at 
org.apache.phoenix.hbase.index.LockManager.lockRow(LockManager.java:96)
at 
org.apache.phoenix.hbase.index.Indexer.preBatchMutateWithExceptions(Indexer.java:421)
at 
org.apache.phoenix.hbase.index.Indexer.preBatchMutate(Indexer.java:370)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$35.call(RegionCoprocessorHost.java:1007)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1749)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1705)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preBatchMutate(RegionCoprocessorHost.java:1003)
at 
org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3190)
at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2976)
at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2918)
at 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.rebuildIndices(UngroupedAggregateRegionObserver.java:1074)
at 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:369)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2629)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2833)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34950)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)

2018-04-27 16:14:48,051 DEBUG 
[RpcServer.FifoWFPBQ.default.handler=1,queue=0,port=26069] 
org.apache.hadoop.hbase.ipc.CallRunner(126): 
RpcServer.FifoWFPBQ.default.handler=1,queue=0,port=26069: callId: 1941 service: 
ClientService methodName: Scan size: 40 connection: 127.0.0.1:14017
org.apache.hadoop.hbase.UnknownScannerException: Throwing 
UnknownScannerException to reset the client scanner state for clients older 
than 1.3.
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2893)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34950)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
Caused by: org.apache.hadoop.hbase.exceptions.TimeoutIOException: Timed out 
waiting for lock for row: 80 00 00 01 80 00 00 00
at 
org.apache.phoenix.hbase.index.LockManager.lockRow(LockManager.java:96)
at 
org.apache

[jira] [Updated] (PHOENIX-4715) PartialIndexRebuilderIT tests fail after switching master to HBase 1.4

2018-04-27 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4715:

Description: 
I think the 3 test failures in PartialIndexRebuilderIT started happening after 
we switched master to HBase 1.4 as part of PHOENIX-4076. 

Perhaps [~lhofhansl] or [~apurtell] has some insight.
{code:java}
[ERROR] Failures: 
[ERROR] PartialIndexRebuilderIT.testConcurrentUpsertsWithRebuild:230 Expected 
equality for V1, but null!=11 
[ERROR] PartialIndexRebuilderIT.testDeleteAndUpsertAfterFailure:347 Expected 
equality for V2, but null!=1 
[ERROR] PartialIndexRebuilderIT.testWriteWhileRebuilding:396 Expected equality 
for V2, but null!=2 

{code}
testDeleteAndUpsertAfterFailure and testWriteWhileRebuilding pass for me 
locally just before PHOENIX-4076 was committed. 
testConcurrentUpsertsWithRebuild fails with the following exception at the 
commit before PHOENIX-4076.
{code:java}
2018-04-27 16:14:48,049 ERROR 
[RpcServer.FifoWFPBQ.default.handler=1,queue=0,port=26069] 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver(1089): 
IOException during rebuilding: 
org.apache.hadoop.hbase.exceptions.TimeoutIOException: Timed out waiting for 
lock for row: 80 00 00 01 80 00 00 00
at 
org.apache.phoenix.hbase.index.LockManager.lockRow(LockManager.java:96)
at 
org.apache.phoenix.hbase.index.Indexer.preBatchMutateWithExceptions(Indexer.java:421)
at 
org.apache.phoenix.hbase.index.Indexer.preBatchMutate(Indexer.java:370)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$35.call(RegionCoprocessorHost.java:1007)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1749)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1705)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preBatchMutate(RegionCoprocessorHost.java:1003)
at 
org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3190)
at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2976)
at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2918)
at 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.rebuildIndices(UngroupedAggregateRegionObserver.java:1074)
at 
org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:369)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:245)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2629)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2833)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34950)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)

2018-04-27 16:14:48,051 DEBUG 
[RpcServer.FifoWFPBQ.default.handler=1,queue=0,port=26069] 
org.apache.hadoop.hbase.ipc.CallRunner(126): 
RpcServer.FifoWFPBQ.default.handler=1,queue=0,port=26069: callId: 1941 service: 
ClientService methodName: Scan size: 40 connection: 127.0.0.1:14017
org.apache.hadoop.hbase.UnknownScannerException: Throwing 
UnknownScannerException to reset the client scanner state for clients older 
than 1.3.
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2893)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34950)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
Caused by: org.apache.hadoop.hbase.exceptions.TimeoutIOException: Timed out 
waiting for lock for row: 80 00 00 01 80 00 00 00
at 
org.apache.phoenix.hbase.index.LockManager.lockRow(LockManager.java:96)
at 
org.apache.phoenix.hbase.index.Indexer.preBatchMutateWithExceptions(Indexer.java:421)
at 
org.apache.phoenix.hbase.index.Indexer.preBatchMutate(Indexe

[jira] [Commented] (PHOENIX-4713) ConcurrentMutationsIT.testConcurrentDeletesAndUpsertValues is failing on master after PHOENIX-4531

2018-04-27 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457207#comment-16457207
 ] 

Thomas D'Silva commented on PHOENIX-4713:
-

I think the 3 test failures in PartialIndexRebuilderIT started happening after 
we switched master to HBase 1.4 as part of PHOENIX-4076. 
I will create another JIRA for them.

> ConcurrentMutationsIT.testConcurrentDeletesAndUpsertValues is failing on 
> master after PHOENIX-4531 
> ---
>
> Key: PHOENIX-4713
> URL: https://issues.apache.org/jira/browse/PHOENIX-4713
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Thomas D'Silva
>Assignee: Vincent Poon
>Priority: Major
>
> ConcurrentMutationsIT.testConcurrentDeletesAndUpsertValues:214 Expected to 
> find PK in data table: (0,0)





[jira] [Resolved] (PHOENIX-4705) Use XMLInputFactory.newInstance() instead of XMLInputFactory.newFactory()

2018-04-27 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-4705.
---
Resolution: Fixed

> Use XMLInputFactory.newInstance() instead of XMLInputFactory.newFactory()
> -
>
> Key: PHOENIX-4705
> URL: https://issues.apache.org/jira/browse/PHOENIX-4705
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4705.patch
>
>
> Use XMLInputFactory.newInstance() instead of XMLInputFactory.newFactory() in 
> Pherf as the latter doesn't compile (at least for me).
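For context, XMLInputFactory.newInstance() is the original JSR-173 factory method and resolves against any StAX API jar, while newFactory() was added to the API later, so builds that pick up an older stax-api artifact likely fail to compile against it. A minimal sketch of the portable call (the class name and XML payload here are illustrative, not from Pherf):

```java
import java.io.StringReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamReader;

public class StaxDemo {
    // Parse a small document with the portable newInstance() entry point
    // and return the root element's local name.
    static String rootName(String xml) throws XMLStreamException {
        XMLInputFactory factory = XMLInputFactory.newInstance();
        XMLStreamReader reader =
                factory.createXMLStreamReader(new StringReader(xml));
        reader.nextTag(); // advance to the root START_ELEMENT
        return reader.getLocalName();
    }

    public static void main(String[] args) throws XMLStreamException {
        System.out.println(rootName("<msg>ok</msg>")); // prints "msg"
    }
}
```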





[jira] [Resolved] (PHOENIX-4709) Alter split policy in upgrade path for system tables

2018-04-27 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-4709.
---
   Resolution: Fixed
Fix Version/s: 5.0.0
   4.14.0

> Alter split policy in upgrade path for system tables
> 
>
> Key: PHOENIX-4709
> URL: https://issues.apache.org/jira/browse/PHOENIX-4709
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4709_v1.patch
>
>
> With PHOENIX-4700, the split policy would only change for new installations. 
> For existing installations, the schema of system tables only changes in the 
> upgrade path, including for HBase metadata now. Thus we need an ALTER TABLE 
> call in our upgrade path.





[jira] [Resolved] (PHOENIX-4710) Don't set KEEP_DELETED_CELLS or VERSIONS for SYSTEM.LOG

2018-04-27 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-4710.
---
   Resolution: Fixed
Fix Version/s: 5.0.0
   4.14.0

> Don't set KEEP_DELETED_CELLS or VERSIONS for SYSTEM.LOG
> ---
>
> Key: PHOENIX-4710
> URL: https://issues.apache.org/jira/browse/PHOENIX-4710
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4710.patch
>
>
> We shouldn't be setting KEEP_DELETED_CELLS or VERSIONS for SYSTEM.LOG since 
> the table is immutable.





[jira] [Resolved] (PHOENIX-4711) Unable to set property on table with VARBINARY as last column

2018-04-27 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-4711.
---
   Resolution: Fixed
Fix Version/s: 5.0.0
   4.14.0

> Unable to set property on table with VARBINARY as last column
> -
>
> Key: PHOENIX-4711
> URL: https://issues.apache.org/jira/browse/PHOENIX-4711
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4711_v1.patch
>
>
> Our check for preventing the addition of a column kicks in even when you're 
> not adding a column, but instead are trying to set a property.
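The shape of the fix can be sketched as a guard that only fires when columns are actually being added; the names below are invented for illustration (the real check lives in MetaDataClient):

```java
public class AlterTableCheck {
    // Hypothetical sketch: reject adding a column after a trailing
    // VARBINARY column, but let property-only ALTERs (zero new columns)
    // pass through untouched.
    static void validate(int numNewColumns, boolean lastColumnIsVarbinary) {
        if (numNewColumns > 0 && lastColumnIsVarbinary) {
            throw new IllegalArgumentException(
                    "Cannot add a column to a table whose last column is VARBINARY");
        }
    }

    public static void main(String[] args) {
        validate(0, true); // setting a property only: allowed
    }
}
```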





[jira] [Commented] (PHOENIX-4710) Don't set KEEP_DELETED_CELLS or VERSIONS for SYSTEM.LOG

2018-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457154#comment-16457154
 ] 

Hudson commented on PHOENIX-4710:
-

ABORTED: Integrated in Jenkins build Phoenix-4.x-HBase-0.98 #1871 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/1871/])
PHOENIX-4710 Don't set KEEP_DELETED_CELLS or VERSIONS for SYSTEM.LOG (jtaylor: 
rev d6d4d7b07f1ca012a5fbe7378e8eaf6b66603923)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionlessQueryServicesImpl.java


> Don't set KEEP_DELETED_CELLS or VERSIONS for SYSTEM.LOG
> ---
>
> Key: PHOENIX-4710
> URL: https://issues.apache.org/jira/browse/PHOENIX-4710
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Attachments: PHOENIX-4710.patch
>
>
> We shouldn't be setting KEEP_DELETED_CELLS or VERSIONS for SYSTEM.LOG since 
> the table is immutable.





[jira] [Commented] (PHOENIX-4709) Alter split policy in upgrade path for system tables

2018-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457156#comment-16457156
 ] 

Hudson commented on PHOENIX-4709:
-

ABORTED: Integrated in Jenkins build Phoenix-4.x-HBase-0.98 #1871 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/1871/])
PHOENIX-4709 Alter split policy in upgrade path for system tables (jtaylor: rev 
e2ebbf9365d126da8d4ab5968c33b7c09014b74a)
* (edit) phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java


> Alter split policy in upgrade path for system tables
> 
>
> Key: PHOENIX-4709
> URL: https://issues.apache.org/jira/browse/PHOENIX-4709
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Attachments: PHOENIX-4709_v1.patch
>
>
> With PHOENIX-4700, the split policy would only change for new installations. 
> For existing installations, the schema of system tables only changes in the 
> upgrade path, including for HBase metadata now. Thus we need an ALTER TABLE 
> call in our upgrade path.





[jira] [Commented] (PHOENIX-4711) Unable to set property on table with VARBINARY as last column

2018-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457155#comment-16457155
 ] 

Hudson commented on PHOENIX-4711:
-

ABORTED: Integrated in Jenkins build Phoenix-4.x-HBase-0.98 #1871 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/1871/])
PHOENIX-4711 Unable to set property on table with VARBINARY as last (jtaylor: 
rev 15ba73cd56a0371b69ec498dec383c66315bedc1)
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java


> Unable to set property on table with VARBINARY as last column
> -
>
> Key: PHOENIX-4711
> URL: https://issues.apache.org/jira/browse/PHOENIX-4711
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Attachments: PHOENIX-4711_v1.patch
>
>
> Our check for preventing the addition of a column kicks in even when you're 
> not adding a column, but instead are trying to set a property.





[jira] [Commented] (PHOENIX-4705) Use XMLInputFactory.newInstance() instead of XMLInputFactory.newFactory()

2018-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457153#comment-16457153
 ] 

Hudson commented on PHOENIX-4705:
-

ABORTED: Integrated in Jenkins build Phoenix-4.x-HBase-0.98 #1871 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/1871/])
PHOENIX-4705 Use XMLInputFactory.newInstance() instead of (jtaylor: rev 
f5f81fc761423d1a3d4b2d43834622c1c6709cad)
* (edit) 
phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/XMLConfigParser.java
* (edit) 
phoenix-pherf/src/main/java/org/apache/phoenix/pherf/result/impl/XMLResultHandler.java


> Use XMLInputFactory.newInstance() instead of XMLInputFactory.newFactory()
> -
>
> Key: PHOENIX-4705
> URL: https://issues.apache.org/jira/browse/PHOENIX-4705
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4705.patch
>
>
> Use XMLInputFactory.newInstance() instead of XMLInputFactory.newFactory() in 
> Pherf as the latter doesn't compile (at least for me).





[jira] [Commented] (PHOENIX-4704) Presplit index tables when building asynchronously

2018-04-27 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457151#comment-16457151
 ] 

Andrew Purtell commented on PHOENIX-4704:
-

Even a uniform split into a few regions would be an improvement? (And 
subsequent organic splitting would cause region boundaries to move toward the 
ideal.)

> Presplit index tables when building asynchronously
> --
>
> Key: PHOENIX-4704
> URL: https://issues.apache.org/jira/browse/PHOENIX-4704
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Vincent Poon
>Priority: Major
>
> For large data tables with many regions, if we build the index asynchronously 
> using the IndexTool, the index table will initially face a hotspot as all data 
> region mappers attempt to write to the sole new index region. This can 
> potentially lead to the index getting disabled if writes to the index table 
> time out during this hotspotting.
> We can add an optional step (or perhaps activate it based on the count of 
> regions in the data table) to the IndexTool to first do a MR job to gather 
> stats on the indexed column values, and then attempt to presplit the index 
> table before we do the actual index build MR job.
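The proposed flow (sample the indexed column values, derive split points, presplit before the build) can be sketched roughly as below; the helper is hypothetical, and no such step exists in IndexTool yet:

```java
public class SplitPointPicker {
    // Hypothetical helper: given a sorted sample of index row keys (as a
    // stats-gathering MR job might produce), pick numRegions-1 evenly
    // spaced split keys to presplit the index table with before the build.
    static byte[][] pickSplitKeys(byte[][] sortedSample, int numRegions) {
        byte[][] splits = new byte[numRegions - 1][];
        for (int i = 1; i < numRegions; i++) {
            splits[i - 1] = sortedSample[i * sortedSample.length / numRegions];
        }
        return splits;
    }

    public static void main(String[] args) {
        byte[][] sample = new byte[10][];
        for (int i = 0; i < 10; i++) sample[i] = new byte[]{(byte) ('a' + i)};
        // 4 regions over a 10-key sample -> split keys at positions 2, 5, 7
        for (byte[] k : pickSplitKeys(sample, 4)) {
            System.out.println((char) k[0]); // prints c, f, h
        }
    }
}
```

The resulting split keys would then be passed to the admin API that creates the index table, so each mapper writes into its own region instead of hotspotting the single initial one.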





[jira] [Commented] (PHOENIX-4709) Alter split policy in upgrade path for system tables

2018-04-27 Thread Chinmay Kulkarni (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457142#comment-16457142
 ] 

Chinmay Kulkarni commented on PHOENIX-4709:
---

lgtm +1

> Alter split policy in upgrade path for system tables
> 
>
> Key: PHOENIX-4709
> URL: https://issues.apache.org/jira/browse/PHOENIX-4709
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Attachments: PHOENIX-4709_v1.patch
>
>
> With PHOENIX-4700, the split policy would only change for new installations. 
> For existing installations, the schema of system tables only changes in the 
> upgrade path, including for HBase metadata now. Thus we need an ALTER TABLE 
> call in our upgrade path.





[jira] [Commented] (PHOENIX-4713) ConcurrentMutationsIT.testConcurrentDeletesAndUpsertValues is failing on master after PHOENIX-4531

2018-04-27 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457133#comment-16457133
 ] 

James Taylor commented on PHOENIX-4713:
---

I see 6 failures: 

org.apache.phoenix.end2end.index.PartialIndexRebuilderIT.testWriteWhileRebuilding

org.apache.phoenix.end2end.index.PartialIndexRebuilderIT.testConcurrentUpsertsWithRebuild

org.apache.phoenix.end2end.index.PartialIndexRebuilderIT.testDeleteAndUpsertAfterFailure 
[5 times]

org.apache.phoenix.end2end.ConcurrentMutationsIT.testConcurrentDeletesAndUpsertValues

org.apache.phoenix.end2end.OrderByIT.testOrderByReverseOptimizationWithNUllsLastBug3491

https://builds.apache.org/job/Phoenix-master/2009/#showFailuresLink



> ConcurrentMutationsIT.testConcurrentDeletesAndUpsertValues is failing on 
> master after PHOENIX-4531 
> ---
>
> Key: PHOENIX-4713
> URL: https://issues.apache.org/jira/browse/PHOENIX-4713
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Thomas D'Silva
>Assignee: Vincent Poon
>Priority: Major
>
> ConcurrentMutationsIT.testConcurrentDeletesAndUpsertValues:214 Expected to 
> find PK in data table: (0,0)





[jira] [Commented] (PHOENIX-4713) ConcurrentMutationsIT.testConcurrentDeletesAndUpsertValues is failing on master after PHOENIX-4531

2018-04-27 Thread Vincent Poon (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457094#comment-16457094
 ] 

Vincent Poon commented on PHOENIX-4713:
---

This is only failing on master with HBase 1.4; it seems to be due to some 
difference in HBase 1.4.

> ConcurrentMutationsIT.testConcurrentDeletesAndUpsertValues is failing on 
> master after PHOENIX-4531 
> ---
>
> Key: PHOENIX-4713
> URL: https://issues.apache.org/jira/browse/PHOENIX-4713
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Thomas D'Silva
>Assignee: Vincent Poon
>Priority: Major
>
> ConcurrentMutationsIT.testConcurrentDeletesAndUpsertValues:214 Expected to 
> find PK in data table: (0,0)





[jira] [Commented] (PHOENIX-4593) Detect and fail queries that are deemed too expensive

2018-04-27 Thread Cody Marcel (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457057#comment-16457057
 ] 

Cody Marcel commented on PHOENIX-4593:
--

[~jamestaylor] This is interesting and basically the entire premise of what we 
are using stats for. Do you have thoughts on how this would be designed and 
surfaced to clients?

> Detect and fail queries that are deemed too expensive
> -
>
> Key: PHOENIX-4593
> URL: https://issues.apache.org/jira/browse/PHOENIX-4593
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Priority: Major
>
> Based on a conversation over on PHOENIX-1556, we should have configurable 
> limits for various query operators:
> - max size of client-side order by
> - max size of server-side order by
> - max size of client-side aggregation
> - max size of server-side aggregation
> - max bytes processed for an UPSERT SELECT
> - max rows deleted by DELETE
> Some of these are controlled by the max amount of memory allowed, but this is 
> suboptimal as you end up using cluster resources and then failing at runtime. 
> Ideally, if we had histograms available (PHOENIX-1178), we could detect at 
> compile time if we think the limits will be reached and then disallow them.
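One rough sketch of what such a compile-time guard could look like (all names invented; nothing like this exists in Phoenix yet):

```java
public class QueryCostGuard {
    // Hypothetical sketch: with histogram-based row estimates available at
    // compile time, reject a plan whose operator would exceed its configured
    // row cap, instead of consuming cluster resources and failing mid-run.
    static void checkLimit(String operator, long estimatedRows, long maxRows) {
        if (estimatedRows > maxRows) {
            throw new IllegalStateException("Rejected at compile time: "
                    + operator + " would process " + estimatedRows
                    + " rows, limit is " + maxRows);
        }
    }

    public static void main(String[] args) {
        checkLimit("client-side ORDER BY", 50_000L, 1_000_000L); // within limit
    }
}
```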





[jira] [Commented] (PHOENIX-4705) Use XMLInputFactory.newInstance() instead of XMLInputFactory.newFactory()

2018-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456999#comment-16456999
 ] 

Hudson commented on PHOENIX-4705:
-

FAILURE: Integrated in Jenkins build Phoenix-4.x-HBase-1.3 #108 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/108/])
PHOENIX-4705 Use XMLInputFactory.newInstance() instead of (jtaylor: rev 
dad8019e49b2f946b234e2242875800910bd4723)
* (edit) 
phoenix-pherf/src/main/java/org/apache/phoenix/pherf/result/impl/XMLResultHandler.java
* (edit) 
phoenix-pherf/src/main/java/org/apache/phoenix/pherf/configuration/XMLConfigParser.java


> Use XMLInputFactory.newInstance() instead of XMLInputFactory.newFactory()
> -
>
> Key: PHOENIX-4705
> URL: https://issues.apache.org/jira/browse/PHOENIX-4705
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4705.patch
>
>
> Use XMLInputFactory.newInstance() instead of XMLInputFactory.newFactory() in 
> Pherf as the latter doesn't compile (at least for me).





[jira] [Commented] (PHOENIX-4711) Unable to set property on table with VARBINARY as last column

2018-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457001#comment-16457001
 ] 

Hudson commented on PHOENIX-4711:
-

FAILURE: Integrated in Jenkins build Phoenix-4.x-HBase-1.3 #108 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/108/])
PHOENIX-4711 Unable to set property on table with VARBINARY as last (jtaylor: 
rev 4b0feaee2deb3c90c58a4e5a7a726bfbf72466fa)
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java


> Unable to set property on table with VARBINARY as last column
> -
>
> Key: PHOENIX-4711
> URL: https://issues.apache.org/jira/browse/PHOENIX-4711
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Attachments: PHOENIX-4711_v1.patch
>
>
> Our check for preventing the addition of a column kicks in even when you're 
> not adding a column, but instead are trying to set a property.





[jira] [Commented] (PHOENIX-4710) Don't set KEEP_DELETED_CELLS or VERSIONS for SYSTEM.LOG

2018-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457000#comment-16457000
 ] 

Hudson commented on PHOENIX-4710:
-

FAILURE: Integrated in Jenkins build Phoenix-4.x-HBase-1.3 #108 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/108/])
PHOENIX-4710 Don't set KEEP_DELETED_CELLS or VERSIONS for SYSTEM.LOG (jtaylor: 
rev 45ace77b2fe989be10121f6f0607f361a8d037b4)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionlessQueryServicesImpl.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java


> Don't set KEEP_DELETED_CELLS or VERSIONS for SYSTEM.LOG
> ---
>
> Key: PHOENIX-4710
> URL: https://issues.apache.org/jira/browse/PHOENIX-4710
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Attachments: PHOENIX-4710.patch
>
>
> We shouldn't be setting KEEP_DELETED_CELLS or VERSIONS for SYSTEM.LOG since 
> the table is immutable.





[jira] [Commented] (PHOENIX-4709) Alter split policy in upgrade path for system tables

2018-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16457002#comment-16457002
 ] 

Hudson commented on PHOENIX-4709:
-

FAILURE: Integrated in Jenkins build Phoenix-4.x-HBase-1.3 #108 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/108/])
PHOENIX-4709 Alter split policy in upgrade path for system tables (jtaylor: rev 
59d704e4d884e8c38168b3a0c19a9620f5f32ce6)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java


> Alter split policy in upgrade path for system tables
> 
>
> Key: PHOENIX-4709
> URL: https://issues.apache.org/jira/browse/PHOENIX-4709
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Attachments: PHOENIX-4709_v1.patch
>
>
> With PHOENIX-4700, the split policy changes only for new installations. 
> For existing installations, the schema of system tables (now including their 
> HBase metadata) changes only in the upgrade path. Thus we need an ALTER TABLE 
> call in our upgrade path.





[jira] [Resolved] (PHOENIX-4714) Fix WALReplayWithIndexWritesAndCompressedWALIT failure in 4.x-HBase-0.98 branch

2018-04-27 Thread Vincent Poon (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon resolved PHOENIX-4714.
---
   Resolution: Fixed
Fix Version/s: 4.14.0

Pushed to 4.x-HBase-0.98

[~jamestaylor]

> Fix WALReplayWithIndexWritesAndCompressedWALIT failure in 4.x-HBase-0.98 
> branch
> ---
>
> Key: PHOENIX-4714
> URL: https://issues.apache.org/jira/browse/PHOENIX-4714
> Project: Phoenix
>  Issue Type: Test
>Affects Versions: 4.14.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Minor
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4714.v1.4.x-HBase-0.98.patch
>
>






[jira] [Updated] (PHOENIX-4714) Fix WALReplayWithIndexWritesAndCompressedWALIT failure in 4.x-HBase-0.98 branch

2018-04-27 Thread Vincent Poon (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-4714:
--
Attachment: cit.patch

> Fix WALReplayWithIndexWritesAndCompressedWALIT failure in 4.x-HBase-0.98 
> branch
> ---
>
> Key: PHOENIX-4714
> URL: https://issues.apache.org/jira/browse/PHOENIX-4714
> Project: Phoenix
>  Issue Type: Test
>Affects Versions: 4.14.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Minor
> Attachments: PHOENIX-4714.v1.4.x-HBase-0.98.patch
>
>






[jira] [Updated] (PHOENIX-4714) Fix WALReplayWithIndexWritesAndCompressedWALIT failure in 4.x-HBase-0.98 branch

2018-04-27 Thread Vincent Poon (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-4714:
--
Attachment: (was: cit.patch)

> Fix WALReplayWithIndexWritesAndCompressedWALIT failure in 4.x-HBase-0.98 
> branch
> ---
>
> Key: PHOENIX-4714
> URL: https://issues.apache.org/jira/browse/PHOENIX-4714
> Project: Phoenix
>  Issue Type: Test
>Affects Versions: 4.14.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Minor
> Attachments: PHOENIX-4714.v1.4.x-HBase-0.98.patch
>
>






[jira] [Updated] (PHOENIX-4714) Fix WALReplayWithIndexWritesAndCompressedWALIT failure in 4.x-HBase-0.98 branch

2018-04-27 Thread Vincent Poon (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-4714:
--
Attachment: PHOENIX-4714.v1.4.x-HBase-0.98.patch

> Fix WALReplayWithIndexWritesAndCompressedWALIT failure in 4.x-HBase-0.98 
> branch
> ---
>
> Key: PHOENIX-4714
> URL: https://issues.apache.org/jira/browse/PHOENIX-4714
> Project: Phoenix
>  Issue Type: Test
>Affects Versions: 4.14.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Minor
> Attachments: PHOENIX-4714.v1.4.x-HBase-0.98.patch
>
>






[jira] [Created] (PHOENIX-4714) Fix WALReplayWithIndexWritesAndCompressedWALIT failure in 4.x-HBase-0.98 branch

2018-04-27 Thread Vincent Poon (JIRA)
Vincent Poon created PHOENIX-4714:
-

 Summary: Fix WALReplayWithIndexWritesAndCompressedWALIT failure in 
4.x-HBase-0.98 branch
 Key: PHOENIX-4714
 URL: https://issues.apache.org/jira/browse/PHOENIX-4714
 Project: Phoenix
  Issue Type: Test
Affects Versions: 4.14.0
Reporter: Vincent Poon
Assignee: Vincent Poon








[jira] [Created] (PHOENIX-4713) ConcurrentMutationsIT.testConcurrentDeletesAndUpsertValues is failing on master after PHOENIX-4531

2018-04-27 Thread Thomas D'Silva (JIRA)
Thomas D'Silva created PHOENIX-4713:
---

 Summary: 
ConcurrentMutationsIT.testConcurrentDeletesAndUpsertValues is failing on master 
after PHOENIX-4531 
 Key: PHOENIX-4713
 URL: https://issues.apache.org/jira/browse/PHOENIX-4713
 Project: Phoenix
  Issue Type: Bug
Reporter: Thomas D'Silva
Assignee: Vincent Poon


ConcurrentMutationsIT.testConcurrentDeletesAndUpsertValues:214 Expected to find 
PK in data table: (0,0)





[jira] [Commented] (PHOENIX-4645) PhoenixStorageHandler doesn't handle correctly data/timestamp in push down predicate when engine is tez.

2018-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456797#comment-16456797
 ] 

Hudson commented on PHOENIX-4645:
-

FAILURE: Integrated in Jenkins build PreCommit-PHOENIX-Build #1845 (See 
[https://builds.apache.org/job/PreCommit-PHOENIX-Build/1845/])
PHOENIX-4645 PhoenixStorageHandler doesn't handle correctly (rajeshbabu: rev 
8f1cef824b086c7c697688767e0460c18fa554d6)
* (edit) 
phoenix-hive/src/it/java/org/apache/phoenix/hive/HivePhoenixStoreIT.java
* (edit) 
phoenix-hive/src/main/java/org/apache/phoenix/hive/constants/PhoenixStorageHandlerConstants.java
* (edit) 
phoenix-hive/src/main/java/org/apache/phoenix/hive/query/PhoenixQueryBuilder.java


> PhoenixStorageHandler doesn't handle correctly data/timestamp in push down 
> predicate when engine is tez. 
> -
>
> Key: PHOENIX-4645
> URL: https://issues.apache.org/jira/browse/PHOENIX-4645
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
>  Labels: HivePhoenix
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4645-wip.patch, PHOENIX-4645.patch
>
>
> DDLs:
> {noformat}
> CREATE TABLE TEST_PHOENIX
> (
> PART_ID BIGINT NOT NULL,
> COMMIT_TIMESTAMP TIMESTAMP,
> CONSTRAINT pk PRIMARY KEY (PART_ID)
> )
> SALT_BUCKETS=9;
> CREATE EXTERNAL TABLE TEST_HIVE
> (
> PART_ID BIGINT,
> SOURCEDB_COMMIT_TIMESTAMP TIMESTAMP
> )
> STORED BY 'org.apache.phoenix.hive.PhoenixStorageHandler'
> TBLPROPERTIES
> (
> "phoenix.table.name" = "TEST_PHOENIX",
> "phoenix.zookeeper.quorum" = "localhost",
> "phoenix.zookeeper.znode.parent" = "/hbase",
> "phoenix.zookeeper.client.port" = "2181",
> "phoenix.rowkeys" = "PART_ID",
> "phoenix.column.mapping" = 
> "part_id:PART_ID,sourcedb_commit_timestamp:COMMIT_TIMESTAMP"
> );
> {noformat}
> Query :
> {noformat}
> hive> select * from TEST_HIVE2 where sourcedb_commit_timestamp between 
> '2018-03-01 01:00:00.000' and  '2018-03-20 01:00:00.000';
> OK
> Failed with exception java.io.IOException:java.lang.RuntimeException: 
> org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): Type 
> mismatch. TIMESTAMP and VARCHAR for "sourcedb_commit_timestamp" >= 
> '2018-03-01 01:00:00.000'
> {noformat}
> That happens because we don't use the mapped column name when we check whether 
> we need to apply the to_timestamp/to_date function. For the default mapping, 
> the regexp patterns don't take into account that the column name is double 
> quoted. 
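For illustration, the quoting mismatch can be reproduced with plain java.util.regex. This is a minimal sketch, not the actual PhoenixQueryBuilder code; the pattern strings are hypothetical and only show why a bare identifier pattern misses a double-quoted column name:

```java
import java.util.regex.Pattern;

public class QuotedColumnSketch {
    public static void main(String[] args) {
        // A predicate as Hive hands it down, with the column name double quoted.
        String predicate = "\"sourcedb_commit_timestamp\" >= '2018-03-01 01:00:00.000'";

        // A pattern that only expects a bare identifier before the operator
        // never matches a quoted column name.
        Pattern bare = Pattern.compile("(\\w+)\\s*>=");
        // Making the surrounding quotes optional restores the match.
        Pattern quoteAware = Pattern.compile("\"?(\\w+)\"?\\s*>=");

        System.out.println(bare.matcher(predicate).find());       // false
        System.out.println(quoteAware.matcher(predicate).find()); // true
    }
}
```

With the quote-aware pattern the builder would recognize the timestamp column and wrap the literal in to_timestamp, avoiding the TypeMismatchException above.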





[jira] [Commented] (PHOENIX-4699) Stop scan after finding first child of table during drop

2018-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456798#comment-16456798
 ] 

Hudson commented on PHOENIX-4699:
-

FAILURE: Integrated in Jenkins build PreCommit-PHOENIX-Build #1845 (See 
[https://builds.apache.org/job/PreCommit-PHOENIX-Build/1845/])
PHOENIX-4699 Stop scan after finding first child of table during drop (jtaylor: 
rev 041e7b5a05213710fc2af0644145d8247cb02191)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java


> Stop scan after finding first child of table during drop
> 
>
> Key: PHOENIX-4699
> URL: https://issues.apache.org/jira/browse/PHOENIX-4699
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4699.patch
>
>
> Rather than scan all children when dropping a table or view, we should stop 
> at the first one (unless we've issued a drop cascade).





[jira] [Commented] (PHOENIX-4707) Python driver not included in assembly

2018-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456804#comment-16456804
 ] 

Hudson commented on PHOENIX-4707:
-

FAILURE: Integrated in Jenkins build PreCommit-PHOENIX-Build #1845 (See 
[https://builds.apache.org/job/PreCommit-PHOENIX-Build/1845/])
PHOENIX-4707 Include python driver in assembly (elserj: rev 
4b84199681f241843855ce652e582e06bb99c1cf)
* (edit) phoenix-assembly/src/build/src.xml
* (edit) phoenix-assembly/src/build/components/all-common-files.xml


> Python driver not included in assembly
> --
>
> Key: PHOENIX-4707
> URL: https://issues.apache.org/jira/browse/PHOENIX-4707
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4707.001.patch
>
>
> I'm not quite sure how I managed this, but the python driver isn't included 
> in the normal assembly. I would have sworn that I had this included in the 
> initial commit, but it seems that it doesn't happen now.
> We need to get this into the src and bin tarballs.





[jira] [Commented] (PHOENIX-4698) Tolerate orphaned views

2018-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456799#comment-16456799
 ] 

Hudson commented on PHOENIX-4698:
-

FAILURE: Integrated in Jenkins build PreCommit-PHOENIX-Build #1845 (See 
[https://builds.apache.org/job/PreCommit-PHOENIX-Build/1845/])
PHOENIX-4698 Tolerate orphaned views (Maddineni Sukumar) (jtaylor: rev 
8dd637dc41ae9fa1cb9c3111481e216059f95c34)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java


> Tolerate orphaned views
> ---
>
> Key: PHOENIX-4698
> URL: https://issues.apache.org/jira/browse/PHOENIX-4698
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Maddineni Sukumar
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4698.patch
>
>
> It's possible that, under rare circumstances, views get orphaned. We 
> should make sure that this situation is tolerated.





[jira] [Commented] (PHOENIX-4700) Fix split policy on system tables other than SYSTEM.CATALOG

2018-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456800#comment-16456800
 ] 

Hudson commented on PHOENIX-4700:
-

FAILURE: Integrated in Jenkins build PreCommit-PHOENIX-Build #1845 (See 
[https://builds.apache.org/job/PreCommit-PHOENIX-Build/1845/])
PHOENIX-4700 Fix split policy on system tables other than SYSTEM.CATALOG 
(jtaylor: rev 5c637e6dd4aabb9d8afe0e07a81533a96e111bfb)
* (add) 
phoenix-core/src/main/java/org/apache/phoenix/schema/SystemStatsSplitPolicy.java
* (add) 
phoenix-core/src/test/java/org/apache/phoenix/schema/SystemSplitPolicyTest.java
* (add) 
phoenix-core/src/main/java/org/apache/phoenix/schema/SystemFunctionSplitPolicy.java
* (add) 
phoenix-core/src/main/java/org/apache/phoenix/schema/SplitOnLeadingVarCharColumnsPolicy.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java


> Fix split policy on system tables other than SYSTEM.CATALOG
> ---
>
> Key: PHOENIX-4700
> URL: https://issues.apache.org/jira/browse/PHOENIX-4700
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4700.patch, PHOENIX-4700_wip1.patch
>
>
> The MetaDataSplitPolicy was changed to cause a table to never split. This is 
> the right thing to do for SYSTEM.CATALOG, but not for the other system tables 
> that use it. We need to create a new split policy that preserves the previous 
> splitting behavior for these other system tables.





[jira] [Commented] (PHOENIX-4686) Phoenix stats does not account for server side limit push downs

2018-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456802#comment-16456802
 ] 

Hudson commented on PHOENIX-4686:
-

FAILURE: Integrated in Jenkins build PreCommit-PHOENIX-Build #1845 (See 
[https://builds.apache.org/job/PreCommit-PHOENIX-Build/1845/])
PHOENIX-4686 Phoenix stats does not account for server side limit push 
(jtaylor: rev bebe66c4680a7cc3b09a703f8608f966ad4905f1)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/ExplainPlanWithStatsEnabledIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/iterate/ParallelIterators.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/stats/StatisticsUtil.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/execute/ScanPlan.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/iterate/SerialIterators.java


> Phoenix stats does not account for server side limit push downs
> ---
>
> Key: PHOENIX-4686
> URL: https://issues.apache.org/jira/browse/PHOENIX-4686
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4686-wip.master.patch, 
> PHOENIX-4686.master.patch, PHOENIX-4686_v2.patch, PHOENIX-4686_wip1.patch
>
>
> For a query like SELECT * FROM FOO LIMIT 10, the EST_BYTES_READ does not 
> correctly take the limit into account when there's no WHERE clause (or a WHERE 
> clause that gets compiled out into a start/stop row key on the scan).





[jira] [Commented] (PHOENIX-4708) Do not propagate GUIDE_POSTS_WIDTH to children

2018-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456803#comment-16456803
 ] 

Hudson commented on PHOENIX-4708:
-

FAILURE: Integrated in Jenkins build PreCommit-PHOENIX-Build #1845 (See 
[https://builds.apache.org/job/PreCommit-PHOENIX-Build/1845/])
PHOENIX-4708 Do not propagate GUIDE_POSTS_WIDTH to children (jtaylor: rev 
ff3273480165c2832cf1dc1bd114a74b93cf9e72)
* (edit) phoenix-core/src/main/java/org/apache/phoenix/schema/TableProperty.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java


> Do not propagate GUIDE_POSTS_WIDTH to children
> --
>
> Key: PHOENIX-4708
> URL: https://issues.apache.org/jira/browse/PHOENIX-4708
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4708_addendum1.patch, PHOENIX-4708_v1.patch
>
>
> When the GUIDE_POSTS_WIDTH is altered on a table, it should not be propagated 
> to the view children since it is always only read from the physical table.





[jira] [Commented] (PHOENIX-4694) Prevent locking of parent table when dropping view to reduce contention

2018-04-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456801#comment-16456801
 ] 

Hudson commented on PHOENIX-4694:
-

FAILURE: Integrated in Jenkins build PreCommit-PHOENIX-Build #1845 (See 
[https://builds.apache.org/job/PreCommit-PHOENIX-Build/1845/])
PHOENIX-4694 Prevent locking of parent table when dropping view to (jtaylor: 
rev 9168f6698d6afdf12a825bd6a4e05ccb2a85b9d0)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java


> Prevent locking of parent table when dropping view to reduce contention
> ---
>
> Key: PHOENIX-4694
> URL: https://issues.apache.org/jira/browse/PHOENIX-4694
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4694.patch, PHOENIX-4694_v2.patch
>
>
> When there are many views with the same parent table, there's a lot of 
> contention when adding new views and dropping existing views. The lock is 
> only necessary when creating/dropping indexes, so it should be removed in the 
> case of views.





[jira] [Commented] (PHOENIX-4709) Alter split policy in upgrade path for system tables

2018-04-27 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456766#comment-16456766
 ] 

Thomas D'Silva commented on PHOENIX-4709:
-

+1

> Alter split policy in upgrade path for system tables
> 
>
> Key: PHOENIX-4709
> URL: https://issues.apache.org/jira/browse/PHOENIX-4709
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Attachments: PHOENIX-4709_v1.patch
>
>
> With PHOENIX-4700, the split policy changes only for new installations. 
> For existing installations, the schema of system tables (now including their 
> HBase metadata) changes only in the upgrade path. Thus we need an ALTER TABLE 
> call in our upgrade path.





[jira] [Commented] (PHOENIX-4702) MD5 Hash Algorithm in Phoenix which is insecure and easily cracked

2018-04-27 Thread Koundinya Ravulapati (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456724#comment-16456724
 ] 

Koundinya Ravulapati commented on PHOENIX-4702:
---

[~gjacoby] I could only see the references at 
[https://github.com/apache/phoenix/search?utf8=%E2%9C%93&q=MD5&type=], which 
match the uses you have given, and nothing solid to prove that the jar depends 
on MD5 as a cryptographic hash.

> MD5 Hash Algorithm in Phoenix which is insecure and easily cracked
> --
>
> Key: PHOENIX-4702
> URL: https://issues.apache.org/jira/browse/PHOENIX-4702
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.7.0
>Reporter: Koundinya Ravulapati
>Priority: Major
>  Labels: Encryption, Phoenix, Security, hashing
>
> Hi Team,
> We ran a security check on 
> compile group: 'org.apache.phoenix', name: 'phoenix', version: 
> '4.7.0-CLABS-1.3.0', classifier: 'client-minimal'
> and our security scan revealed that Phoenix uses the weak MD5 hash, e.g.
> digest = java.security.MessageDigest.getInstance("MD5")
> The hashing algorithm used, MD5, has been found by researchers to be unsafe 
> for protecting sensitive data with today's technology.
> I have checked [https://github.com/apache/phoenix/tree/4.7.0-HBase-1.1] 
> and other versions, and they still use the same algorithm. Is the Phoenix 
> team considering a stronger algorithm such as SHA-256? Can you please 
> let us know if this is already available in any newer version of Phoenix, or 
> in which version it could be made available if the team is working on it.
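On the JDK side, switching algorithms is just a different name passed to MessageDigest.getInstance; the sketch below is illustrative only (the hashHex helper is made up, not Phoenix code) and shows the one practical wrinkle: the digest doubles in size, which matters wherever a fixed hash width is assumed.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class DigestSketch {
    // Hypothetical helper: hex-encode the digest of the input under the given algorithm.
    static String hashHex(String algorithm, byte[] input) throws NoSuchAlgorithmException {
        MessageDigest digest = MessageDigest.getInstance(algorithm);
        StringBuilder hex = new StringBuilder();
        for (byte b : digest.digest(input)) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        byte[] data = "row-key".getBytes(StandardCharsets.UTF_8);
        // MD5 yields a 128-bit digest (32 hex chars); SHA-256 yields 256 bits (64 hex chars).
        System.out.println(hashHex("MD5", data).length());     // 32
        System.out.println(hashHex("SHA-256", data).length()); // 64
    }
}
```

Because of that size difference, any change in Phoenix would also need to account for the longer digest anywhere a hash length is assumed, so a drop-in swap is not automatic.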





[jira] [Commented] (PHOENIX-4711) Unable to set property on table with VARBINARY as last column

2018-04-27 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456718#comment-16456718
 ] 

Thomas D'Silva commented on PHOENIX-4711:
-

+1

> Unable to set property on table with VARBINARY as last column
> -
>
> Key: PHOENIX-4711
> URL: https://issues.apache.org/jira/browse/PHOENIX-4711
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Attachments: PHOENIX-4711_v1.patch
>
>
> Our check for preventing the addition of a column kicks in even when you're 
> not adding a column, but instead are trying to set a property.





[jira] [Commented] (PHOENIX-4712) When creating an index on a table, meta data cache of views related to the table isn't updated

2018-04-27 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456714#comment-16456714
 ] 

Thomas D'Silva commented on PHOENIX-4712:
-

[~brfrn169]

Thanks for looking into this. I think it's failing because we don't update the 
views that are in the existing connection cache when we add an index to the 
parent table.

However, we cannot always just add the index to the PTable of the view. We need 
to ensure that the index has all the columns required by the view, and we also 
need to tack on the view's WHERE clause so that we don't access rows that can't 
be accessed via the view when using the index (see addIndexesFromParentTable). 
Maybe it's easier to just remove the views of this table from the connection 
cache, so that the next time they are resolved the index will be added to the 
view, if possible, by addIndexesFromParentTable.

 

[~jamestaylor] 

I am wondering if the view in the connection cache of other clients that didn't 
create the index will be able to use the index. When we create an index on a 
table, do we change the parent table timestamp so that all clients get the new 
index when they resolve the parent table? 


> When creating an index on a table, meta data cache of views related to the 
> table isn't updated
> --
>
> Key: PHOENIX-4712
> URL: https://issues.apache.org/jira/browse/PHOENIX-4712
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-4712.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table
> {code}
> create table tbl (col1 varchar primary key, col2 varchar);
> {code}
> 2. Create a view on the table
> {code}
> create view vw (col3 varchar) as select * from tbl;
> {code}
> 3. Create an index on the table
> {code}
> create index idx ON tbl (col2);
> {code}
> After those steps, when issuing an explain query like the following, it seems 
> that the query doesn't use the index, although the index should be used: 
> {code}
> 0: jdbc:phoenix:> explain select /*+ INDEX(vw idx) */ * from vw where col2 = 
> 'aaa';
> +---+
> | PLAN  |
> +---+
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TBL  |
> | SERVER FILTER BY COL2 = 'aaa' |
> +---+
> {code}
> However, after restarting sqlline, the explain output is changed, and the 
> index is used.
> {code}
> 0: jdbc:phoenix:> explain select /*+ INDEX(vw idx) */ * from vw where col2 = 
> 'aaa';
> +-------------------------------------------------------------------------+
> | PLAN                                                                    |
> +-------------------------------------------------------------------------+
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TBL            |
> | SKIP-SCAN-JOIN TABLE 0                                                  |
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN RANGE SCAN OVER IDX ['aaa']   |
> | SERVER FILTER BY FIRST KEY ONLY                                         |
> | DYNAMIC SERVER FILTER BY "VW.COL1" IN ($3.$5)                           |
> +-------------------------------------------------------------------------+
> {code}
> I think that when creating an index on a table, the meta data cache of views 
> related to the table isn't updated, so the index isn't used for that query. 
> However, after restarting sqlline, the meta data cache is refreshed, so the 
> index is used.
> When creating an index on a table, we should update the meta data cache of 
> views related to the table.





[jira] [Commented] (PHOENIX-4712) When creating an index on a table, meta data cache of views related to the table isn't updated

2018-04-27 Thread Toshihiro Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456674#comment-16456674
 ] 

Toshihiro Suzuki commented on PHOENIX-4712:
---

[~jamestaylor] No, I didn't configure "phoenix.default.update.cache.frequency". 
I will take a look at addIndexesFromParent() method.  Thanks.



> When creating an index on a table, meta data cache of views related to the 
> table isn't updated
> --
>
> Key: PHOENIX-4712
> URL: https://issues.apache.org/jira/browse/PHOENIX-4712
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-4712.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table
> {code}
> create table tbl (col1 varchar primary key, col2 varchar);
> {code}
> 2. Create a view on the table
> {code}
> create view vw (col3 varchar) as select * from tbl;
> {code}
> 3. Create an index on the table
> {code}
> create index idx ON tbl (col2);
> {code}
> After those steps, when issuing an explain query like the following, it seems 
> that the query doesn't use the index, although the index should be used: 
> {code}
> 0: jdbc:phoenix:> explain select /*+ INDEX(vw idx) */ * from vw where col2 = 
> 'aaa';
> +---+
> | PLAN  |
> +---+
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TBL  |
> | SERVER FILTER BY COL2 = 'aaa' |
> +---+
> {code}
> However, after restarting sqlline, the explain output is changed, and the 
> index is used.
> {code}
> 0: jdbc:phoenix:> explain select /*+ INDEX(vw idx) */ * from vw where col2 = 
> 'aaa';
> +-------------------------------------------------------------------------+
> | PLAN                                                                    |
> +-------------------------------------------------------------------------+
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TBL            |
> | SKIP-SCAN-JOIN TABLE 0                                                  |
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN RANGE SCAN OVER IDX ['aaa']   |
> | SERVER FILTER BY FIRST KEY ONLY                                         |
> | DYNAMIC SERVER FILTER BY "VW.COL1" IN ($3.$5)                           |
> +-------------------------------------------------------------------------+
> {code}
> I think that when creating an index on a table, the meta data cache of views 
> related to the table isn't updated, so the index isn't used for that query. 
> However, after restarting sqlline, the meta data cache is refreshed, so the 
> index is used.
> When creating an index on a table, we should update the meta data cache of 
> views related to the table.





[jira] [Commented] (PHOENIX-4712) When creating an index on a table, meta data cache of views related to the table isn't updated

2018-04-27 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456640#comment-16456640
 ] 

James Taylor commented on PHOENIX-4712:
---

And no default update cache frequency configured? If not, I’d recommend 
checking the updateCache code, in particular the addIndexesFromParent() method. 
Why isn’t it updating the cached view? Might try with the latest too as I 
believe there was a bug fixed in this area recently.

> When creating an index on a table, meta data cache of views related to the 
> table isn't updated
> --
>
> Key: PHOENIX-4712
> URL: https://issues.apache.org/jira/browse/PHOENIX-4712
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-4712.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table
> {code}
> create table tbl (col1 varchar primary key, col2 varchar);
> {code}
> 2. Create a view on the table
> {code}
> create view vw (col3 varchar) as select * from tbl;
> {code}
> 3. Create an index on the table
> {code}
> create index idx ON tbl (col2);
> {code}
> After those steps, when issuing an explain query like the following, it seems 
> that the query doesn't use the index, although the index should be used: 
> {code}
> 0: jdbc:phoenix:> explain select /*+ INDEX(vw idx) */ * from vw where col2 = 
> 'aaa';
> +---+
> | PLAN  |
> +---+
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TBL  |
> | SERVER FILTER BY COL2 = 'aaa' |
> +---+
> {code}
> However, after restarting sqlline, the explain output is changed, and the 
> index is used.
> {code}
> 0: jdbc:phoenix:> explain select /*+ INDEX(vw idx) */ * from vw where col2 = 
> 'aaa';
> +-------------------------------------------------------------------------+
> | PLAN                                                                    |
> +-------------------------------------------------------------------------+
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TBL            |
> | SKIP-SCAN-JOIN TABLE 0                                                  |
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN RANGE SCAN OVER IDX ['aaa']   |
> | SERVER FILTER BY FIRST KEY ONLY                                         |
> | DYNAMIC SERVER FILTER BY "VW.COL1" IN ($3.$5)                           |
> +-------------------------------------------------------------------------+
> {code}
> I think that when creating an index on a table, the meta data cache of views 
> related to the table isn't updated, so the index isn't used for that query. 
> However, after restarting sqlline, the meta data cache is refreshed, so the 
> index is used.
> When creating an index on a table, we should update the meta data cache of 
> views related to the table.





[jira] [Commented] (PHOENIX-4712) When creating an index on a table, meta data cache of views related to the table isn't updated

2018-04-27 Thread Toshihiro Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456623#comment-16456623
 ] 

Toshihiro Suzuki commented on PHOENIX-4712:
---

[~jamestaylor] No, I didn't set UPDATE_CACHE_FREQUENCY to any tables and views. 
The DDLs are in the Description.

> When creating an index on a table, meta data cache of views related to the 
> table isn't updated
> --
>
> Key: PHOENIX-4712
> URL: https://issues.apache.org/jira/browse/PHOENIX-4712
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Attachments: PHOENIX-4712.patch
>
>
> Steps to reproduce are as follows:
> 1. Create a table
> {code}
> create table tbl (col1 varchar primary key, col2 varchar);
> {code}
> 2. Create a view on the table
> {code}
> create view vw (col3 varchar) as select * from tbl;
> {code}
> 3. Create an index on the table
> {code}
> create index idx ON tbl (col2);
> {code}
> After those steps, when issuing an explain query like the following, it seems 
> that the query doesn't use the index, although the index should be used: 
> {code}
> 0: jdbc:phoenix:> explain select /*+ INDEX(vw idx) */ * from vw where col2 = 
> 'aaa';
> +---+
> | PLAN  |
> +---+
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TBL  |
> | SERVER FILTER BY COL2 = 'aaa' |
> +---+
> {code}
> However, after restarting sqlline, the explain output is changed, and the 
> index is used.
> {code}
> 0: jdbc:phoenix:> explain select /*+ INDEX(vw idx) */ * from vw where col2 = 'aaa';
> +-----------------------------------------------------------------------+
> |                                 PLAN                                  |
> +-----------------------------------------------------------------------+
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TBL          |
> | SKIP-SCAN-JOIN TABLE 0                                                |
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN RANGE SCAN OVER IDX ['aaa'] |
> | SERVER FILTER BY FIRST KEY ONLY                                       |
> | DYNAMIC SERVER FILTER BY "VW.COL1" IN ($3.$5)                         |
> +-----------------------------------------------------------------------+
> {code}
> I think that when creating an index on a table, the meta data cache of views
> related to the table isn't updated, so the index isn't used for that query.
> However, after restarting sqlline, the meta data cache is refreshed, so the
> index is used.
> When creating an index on a table, we should update the meta data cache of
> views related to the table.
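The staleness the description walks through can be modeled with a toy client-side metadata cache (illustrative Python only; `Server`, `Client`, and `explain` are invented stand-ins here, not Phoenix code):

```python
from dataclasses import dataclass, field


@dataclass
class TableMeta:
    name: str
    indexes: list = field(default_factory=list)  # (index_name, column) pairs


class Server:
    """Stands in for the authoritative catalog (SYSTEM.CATALOG in Phoenix)."""
    def __init__(self):
        self.tables = {}

    def create_table(self, name):
        self.tables[name] = TableMeta(name)

    def create_index(self, table, index_name, column):
        self.tables[table].indexes.append((index_name, column))


class Client:
    """Caches metadata on first use; server-side changes stay invisible
    until the cached entry is dropped (which is what a sqlline restart does)."""
    def __init__(self, server):
        self.server = server
        self.cache = {}

    def _meta(self, table):
        if table not in self.cache:
            live = self.server.tables[table]
            # Copy so later server-side changes are not visible to this client.
            self.cache[table] = TableMeta(live.name, list(live.indexes))
        return self.cache[table]

    def explain(self, table, column):
        meta = self._meta(table)
        for index_name, indexed_col in meta.indexes:
            if indexed_col == column:
                return f"RANGE SCAN OVER {index_name}"
        return f"FULL SCAN OVER {meta.name}"


server = Server()
server.create_table("TBL")
client = Client(server)
client.explain("TBL", "COL2")              # warms the cache (full scan)
server.create_index("TBL", "IDX", "COL2")  # index created after caching
stale = client.explain("TBL", "COL2")      # stale cache: "FULL SCAN OVER TBL"
client.cache.clear()                       # models restarting sqlline
fresh = client.explain("TBL", "COL2")      # refreshed: "RANGE SCAN OVER IDX"
```

The real fix belongs on the metadata path (update or invalidate view metadata when an index is added to the base table); `cache.clear()` above only mirrors why restarting sqlline masks the bug.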





[jira] [Commented] (PHOENIX-4712) When creating an index on a table, meta data cache of views related to the table isn't updated

2018-04-27 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456611#comment-16456611
 ] 

James Taylor commented on PHOENIX-4712:
---

This shouldn’t be necessary as we combine table and view indexes on the client 
when we call update cache. Do you have any UPDATE_CACHE_FREQUENCY set?

FYI, [~tdsilva]
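For context on the question above: UPDATE_CACHE_FREQUENCY is the table/view property that lets a Phoenix client serve cached metadata for up to the given number of milliseconds instead of re-checking the server on every statement. A minimal sketch of that time-bounded refresh idea (illustrative Python; `FreqCache` is an invented name, not Phoenix's API):

```python
import time


class FreqCache:
    """Serve a cached value until it is older than a refresh interval.

    Loose analogue of Phoenix's UPDATE_CACHE_FREQUENCY table property:
    an interval of 0 re-fetches on every access (ALWAYS), while a very
    large interval effectively never re-fetches (NEVER).
    """
    def __init__(self, fetch, frequency_ms):
        self.fetch = fetch                    # callable that hits the "server"
        self.frequency = frequency_ms / 1000.0
        self.value = None
        self.fetched_at = None

    def get(self, now=None):
        # `now` is injectable for deterministic tests; real code would
        # just use time.monotonic().
        now = time.monotonic() if now is None else now
        if self.fetched_at is None or now - self.fetched_at >= self.frequency:
            self.value = self.fetch()
            self.fetched_at = now
        return self.value


versions = iter([1, 2, 3])  # successive server-side metadata versions
cache = FreqCache(lambda: next(versions), frequency_ms=1000)
first = cache.get(now=0.0)   # fetches version 1
second = cache.get(now=0.5)  # within the window: still version 1
third = cache.get(now=1.5)   # window expired: fetches version 2
```

With a nonzero frequency a client can legitimately plan against stale metadata until the window expires, which is why it matters whether the property was set here.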



[jira] [Commented] (PHOENIX-4712) When creating an index on a table, meta data cache of views related to the table isn't updated

2018-04-27 Thread Toshihiro Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16456582#comment-16456582
 ] 

Toshihiro Suzuki commented on PHOENIX-4712:
---

I just attached a v1 patch. 



[jira] [Updated] (PHOENIX-4712) When creating an index on a table, meta data cache of views related to the table isn't updated

2018-04-27 Thread Toshihiro Suzuki (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-4712:
--
Attachment: PHOENIX-4712.patch



[jira] [Updated] (PHOENIX-4712) When creating an index on a table, meta data cache of views related to the table isn't updated

2018-04-27 Thread Toshihiro Suzuki (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated PHOENIX-4712:
--
Description: 
Steps to reproduce are as follows:
1. Create a table
{code}
create table tbl (col1 varchar primary key, col2 varchar);
{code}

2. Create a view on the table
{code}
create view vw (col3 varchar) as select * from tbl;
{code}

3. Create an index on the table
{code}
create index idx ON tbl (col2);
{code}

After these steps, when issuing an explain query like the following, the
query doesn't use the index, although it should:
{code}
0: jdbc:phoenix:> explain select /*+ INDEX(vw idx) */ * from vw where col2 = 
'aaa';
+---+
| PLAN  |
+---+
| CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TBL  |
| SERVER FILTER BY COL2 = 'aaa' |
+---+
{code}

However, after restarting sqlline, the explain output is changed, and the index 
is used.
{code}
0: jdbc:phoenix:> explain select /*+ INDEX(vw idx) */ * from vw where col2 = 'aaa';
+-----------------------------------------------------------------------+
|                                 PLAN                                  |
+-----------------------------------------------------------------------+
| CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TBL          |
| SKIP-SCAN-JOIN TABLE 0                                                |
| CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN RANGE SCAN OVER IDX ['aaa'] |
| SERVER FILTER BY FIRST KEY ONLY                                       |
| DYNAMIC SERVER FILTER BY "VW.COL1" IN ($3.$5)                         |
+-----------------------------------------------------------------------+
{code}

I think that when creating an index on a table, the meta data cache of views
related to the table isn't updated, so the index isn't used for that query.
However, after restarting sqlline, the meta data cache is refreshed, so the
index is used.

When creating an index on a table, we should update meta data cache of views 
related to the table.

  was:
Steps to reproduce are as follows:
1. Create a table
{code}
create table tbl (aaa varchar primary key, bbb varchar);
{code}

2. Create a view on the table
{code}
create view vw (ccc varchar) as select * from tbl;
{code}

3. Create an index on the table
{code}
create index idx ON tbl (bbb);
{code}

After these steps, when issuing an explain query like the following, the
query doesn't use the index, although it should:
{code}
0: jdbc:phoenix:> explain select /*+ INDEX(vw idx) */ * from vw where bbb = 
'aaa';
+---+
| PLAN  |
+---+
| CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TBL  |
| SERVER FILTER BY BBB = 'aaa'  |
+---+
{code}

However, after restarting sqlline, the explain output is changed, and the index 
is used.
{code}
0: jdbc:phoenix:> explain select /*+ INDEX(vw idx) */ * from vw where bbb = 'aaa';
+-----------------------------------------------------------------------+
|                                 PLAN                                  |
+-----------------------------------------------------------------------+
| CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN FULL SCAN OVER TBL          |
| SKIP-SCAN-JOIN TABLE 0                                                |
| CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN RANGE SCAN OVER IDX ['aaa'] |
| SERVER FILTER BY FIRST KEY ONLY                                       |
| DYNAMIC SERVER FILTER BY "VW.AAA" IN ($3.$5)                          |
+-----------------------------------------------------------------------+
{code}

I think that when creating an index on a table, the meta data cache of views
related to the table isn't updated, so the index isn't used for that query.
However, after restarting sqlline, the meta data cache is refreshed, so the
index is used.

When creating an index on a table, we should update meta data cache of views 
related to the table.



[jira] [Created] (PHOENIX-4712) When creating an index on a table, meta data cache of views related to the table isn't updated

2018-04-27 Thread Toshihiro Suzuki (JIRA)
Toshihiro Suzuki created PHOENIX-4712:
-

 Summary: When creating an index on a table, meta data cache of 
views related to the table isn't updated
 Key: PHOENIX-4712
 URL: https://issues.apache.org/jira/browse/PHOENIX-4712
 Project: Phoenix
  Issue Type: Bug
Reporter: Toshihiro Suzuki
Assignee: Toshihiro Suzuki




