[jira] [Comment Edited] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-07-19 Thread chenglei (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15385385#comment-15385385
 ] 

chenglei edited comment on PHOENIX-2900 at 7/20/16 6:04 AM:


[~jamestaylor], maybe the dependent HBase version is wrong? Starting from HBase 1.2.0,
the return type of org.apache.hadoop.hbase.ipc.PhoenixRpcScheduler's dispatch method
is boolean; before HBase 1.2.0, the dispatch method's return type is void.
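
To make the version difference concrete, here is a self-contained compile-time sketch; the class names are hypothetical stand-ins, not actual HBase or Phoenix source:

{code}
// Hypothetical stand-ins for the RpcScheduler contract before and after HBase 1.2.0.
abstract class SchedulerBefore12 {
    // before HBase 1.2.0: dispatch returns nothing
    abstract void dispatch(String task) throws InterruptedException;
}

abstract class SchedulerFrom12 {
    // from HBase 1.2.0 on: dispatch reports whether the call was accepted
    abstract boolean dispatch(String task) throws InterruptedException;
}

class PhoenixLikeScheduler extends SchedulerFrom12 {
    @Override
    boolean dispatch(String task) {
        // Compiles only against the 1.2.0-style parent; against the older parent,
        // javac fails with "return type boolean is not compatible with void",
        // which matches the precommit compile error quoted elsewhere in this thread.
        return true;
    }
}
{code}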


was (Author: comnetwork):
Maybe the dependent HBase version is wrong? Starting from HBase 1.2.0, the return
type of org.apache.hadoop.hbase.ipc.PhoenixRpcScheduler's dispatch method is boolean;
before HBase 1.2.0, the dispatch method's return type is void.

> Unable to find hash cache once a salted table 's first region has split
> ---
>
> Key: PHOENIX-2900
> URL: https://issues.apache.org/jira/browse/PHOENIX-2900
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: chenglei
>Assignee: chenglei
>Priority: Critical
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2900_v1.patch
>
>
> When I join a salted table (which has been split after creation) with another
> table in my business system, I get the following error, even though I clear the
> salted table's TableRegionCache:
> {code:borderStyle=solid} 
>  org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: %�2. The cache might have expired and have been removed.
>   at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:98)
>   at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:218)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:201)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1203)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1517)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1592)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1556)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1198)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3178)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29925)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:116)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:96)
>   at java.lang.Thread.run(Thread.java:745)
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:111)
>   at 
> org.apache.phoenix.iterate.TableResultIterator.initScanner(TableResultIterator.java:127)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:108)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:103)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: %�2. The cache might have expired and have been removed.
>   at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:98)
>   at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:218)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:201)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1203)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1517)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1592)
>   at 
> org.apache.hadoop.hbase.

[jira] [Commented] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-07-19 Thread chenglei (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15385385#comment-15385385
 ] 

chenglei commented on PHOENIX-2900:
---

Maybe the dependent HBase version is wrong? Starting from HBase 1.2.0, the return
type of org.apache.hadoop.hbase.ipc.PhoenixRpcScheduler's dispatch method is boolean;
before HBase 1.2.0, the dispatch method's return type is void.

> Unable to find hash cache once a salted table 's first region has split
> ---
>
> Key: PHOENIX-2900
> URL: https://issues.apache.org/jira/browse/PHOENIX-2900
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: chenglei
>Assignee: chenglei
>Priority: Critical
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2900_v1.patch
>
>
> When I join a salted table (which has been split after creation) with another
> table in my business system, I get the following error, even though I clear the
> salted table's TableRegionCache:
> {code:borderStyle=solid} 
>  org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: %�2. The cache might have expired and have been removed.
>   at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:98)
>   at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:218)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:201)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1203)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1517)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1592)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1556)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1198)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3178)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29925)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:116)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:96)
>   at java.lang.Thread.run(Thread.java:745)
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:111)
>   at 
> org.apache.phoenix.iterate.TableResultIterator.initScanner(TableResultIterator.java:127)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:108)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:103)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: %�2. The cache might have expired and have been removed.
>   at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:98)
>   at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:218)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:201)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1203)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1517)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1592)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1556)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1198)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3178)
>

[jira] [Commented] (PHOENIX-3091) Licensing issues with binary release

2016-07-19 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15385282#comment-15385282
 ] 

Josh Elser commented on PHOENIX-3091:
-

Progress through the bundled dependencies is going rather slowly. I'm about 50%
of the way through the list of bundled artifacts from the shade-plugin's
output. I hope to finish that list tomorrow morning.

Current staged changes 
https://github.com/apache/phoenix/compare/master...joshelser:binary-artifact-licensing?expand=1

> Licensing issues with binary release
> 
>
> Key: PHOENIX-3091
> URL: https://issues.apache.org/jira/browse/PHOENIX-3091
> Project: Phoenix
>  Issue Type: Task
>Reporter: Josh Elser
>
> Umbrella issue to track fixes to the binary release for Apache Phoenix.
> Original thread: 
> https://lists.apache.org/thread.html/f887f8213a81881df8e25cf63ab076b019fd46113fb25f8c8a085412@%3Cdev.phoenix.apache.org%3E



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-3101) Remove net.sourceforge.findbugs:annotations

2016-07-19 Thread Josh Elser (JIRA)
Josh Elser created PHOENIX-3101:
---

 Summary: Remove net.sourceforge.findbugs:annotations
 Key: PHOENIX-3101
 URL: https://issues.apache.org/jira/browse/PHOENIX-3101
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Josh Elser


This artifact is licensed under the LGPL, which is a huge
[no-no|http://www.apache.org/legal/resolved.html#category-x].

Thankfully, there is a clean-room, ASLv2-licensed implementation of the spec
(https://github.com/stephenc/findbugs-annotations), published as
com.github.stephenc.findbugs:findbugs-annotations:1.3.9-1 (which HBase is
already using).
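
A minimal sketch of the corresponding Maven change, assuming the annotations artifact is declared as a direct dependency (the exact module and location in the Phoenix build are not shown here):

{code}
<!-- Replace the LGPL-licensed net.sourceforge.findbugs:annotations dependency
     with the ASLv2-licensed clean-room implementation already used by HBase. -->
<dependency>
  <groupId>com.github.stephenc.findbugs</groupId>
  <artifactId>findbugs-annotations</artifactId>
  <version>1.3.9-1</version>
</dependency>
{code}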



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3097) Incompatibilities with HBase 0.98.6

2016-07-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15384829#comment-15384829
 ] 

ASF GitHub Bot commented on PHOENIX-3097:
-

Github user SsnL commented on the issue:

https://github.com/apache/phoenix/pull/185
  
Rebasing is done.


> Incompatibilities with HBase 0.98.6
> ---
>
> Key: PHOENIX-3097
> URL: https://issues.apache.org/jira/browse/PHOENIX-3097
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Tongzhou Wang
>Assignee: Tongzhou Wang
> Fix For: 4.8.0
>
>
> Two places in the 0.98 code base are not compatible with HBase 0.98.6:
> 1. Calls to `RegionCoprocessorEnvironment.getRegionInfo()`. These can be replaced
> by `env.getRegion().getRegionInfo()`.
> 2. Calls to `User.runAsLoginUser()`. These can be replaced by `try
> {UserGroupInformation.getLoginUser().doAs()} catch ...`
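
A minimal sketch of the two replacements, assuming an existing RegionCoprocessorEnvironment variable named env; the helper class and method names are invented for illustration and are not the actual pull request:

{code}
import java.io.IOException;
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.security.UserGroupInformation;

public class HBase0986Compat {

    // 1. env.getRegionInfo() does not exist in 0.98.6; go through the region instead.
    static HRegionInfo regionInfo(RegionCoprocessorEnvironment env) {
        return env.getRegion().getRegionInfo();
    }

    // 2. User.runAsLoginUser() does not exist in 0.98.6; run the action as the
    //    login user via UserGroupInformation directly.
    static void runAsLoginUser(final Runnable action) throws IOException {
        try {
            UserGroupInformation.getLoginUser().doAs(new PrivilegedExceptionAction<Void>() {
                @Override
                public Void run() {
                    action.run();
                    return null;
                }
            });
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IOException(e);
        }
    }
}
{code}

Wrapping the InterruptedException and restoring the interrupt flag keeps the helper's signature close to the original runAsLoginUser call sites.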



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix issue #185: [PHOENIX-3097] Incompatibilities with HBase 0.98.6

2016-07-19 Thread SsnL
Github user SsnL commented on the issue:

https://github.com/apache/phoenix/pull/185
  
Rebasing is done.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-07-19 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2900:
--
Environment: (was: Phoenix-4.7.0-HBase-0.98,HBase-0.98.6-cdh5.3.2)

> Unable to find hash cache once a salted table 's first region has split
> ---
>
> Key: PHOENIX-2900
> URL: https://issues.apache.org/jira/browse/PHOENIX-2900
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: chenglei
>Assignee: chenglei
>Priority: Critical
> Fix For: 4.8.0
>
>
> When I join a salted table (which has been split after creation) with another
> table in my business system, I get the following error, even though I clear the
> salted table's TableRegionCache:
> {code:borderStyle=solid} 
>  org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: %�2. The cache might have expired and have been removed.
>   at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:98)
>   at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:218)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:201)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1203)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1517)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1592)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1556)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1198)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3178)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29925)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:116)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:96)
>   at java.lang.Thread.run(Thread.java:745)
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:111)
>   at 
> org.apache.phoenix.iterate.TableResultIterator.initScanner(TableResultIterator.java:127)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:108)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:103)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: %�2. The cache might have expired and have been removed.
>   at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:98)
>   at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:218)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:201)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1203)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1517)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1592)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1556)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1198)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3178)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29925)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
>   at org.apache.hadoop.hbase.i

[jira] [Updated] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-07-19 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2900:
--
Attachment: PHOENIX-2900_v1.patch

That's weird, [~elserj], because the class it's complaining about didn't even
change. I'm going to try one more time after deleting the other patch files.

> Unable to find hash cache once a salted table 's first region has split
> ---
>
> Key: PHOENIX-2900
> URL: https://issues.apache.org/jira/browse/PHOENIX-2900
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: chenglei
>Assignee: chenglei
>Priority: Critical
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2900_v1.patch
>
>
> When I join a salted table (which has been split after creation) with another
> table in my business system, I get the following error, even though I clear the
> salted table's TableRegionCache:
> {code:borderStyle=solid} 
>  org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: %�2. The cache might have expired and have been removed.
>   at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:98)
>   at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:218)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:201)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1203)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1517)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1592)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1556)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1198)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3178)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29925)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:116)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:96)
>   at java.lang.Thread.run(Thread.java:745)
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:111)
>   at 
> org.apache.phoenix.iterate.TableResultIterator.initScanner(TableResultIterator.java:127)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:108)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:103)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: %�2. The cache might have expired and have been removed.
>   at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:98)
>   at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:218)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:201)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1203)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1517)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1592)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1556)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1198)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3178)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientSer

[jira] [Updated] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-07-19 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2900:
--
Attachment: (was: PHOENIX-2900_v1.patch)

> Unable to find hash cache once a salted table 's first region has split
> ---
>
> Key: PHOENIX-2900
> URL: https://issues.apache.org/jira/browse/PHOENIX-2900
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
> Environment: Phoenix-4.7.0-HBase-0.98,HBase-0.98.6-cdh5.3.2
>Reporter: chenglei
>Assignee: chenglei
>Priority: Critical
> Fix For: 4.8.0
>
>
> When I join a salted table (which has been split after creation) with another
> table in my business system, I get the following error, even though I clear the
> salted table's TableRegionCache:
> {code:borderStyle=solid} 
>  org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: %�2. The cache might have expired and have been removed.
>   at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:98)
>   at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:218)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:201)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1203)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1517)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1592)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1556)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1198)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3178)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29925)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:116)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:96)
>   at java.lang.Thread.run(Thread.java:745)
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:111)
>   at 
> org.apache.phoenix.iterate.TableResultIterator.initScanner(TableResultIterator.java:127)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:108)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:103)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: %�2. The cache might have expired and have been removed.
>   at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:98)
>   at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:218)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:201)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1203)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1517)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1592)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1556)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1198)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3178)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29925)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.jav

[jira] [Updated] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-07-19 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2900:
--
Attachment: (was: PHOENIX-2900.patch)

> Unable to find hash cache once a salted table 's first region has split
> ---
>
> Key: PHOENIX-2900
> URL: https://issues.apache.org/jira/browse/PHOENIX-2900
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
> Environment: Phoenix-4.7.0-HBase-0.98,HBase-0.98.6-cdh5.3.2
>Reporter: chenglei
>Assignee: chenglei
>Priority: Critical
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2900_v1.patch
>
>
> When I join a salted table (which has been split after creation) with another
> table in my business system, I get the following error, even though I clear the
> salted table's TableRegionCache:
> {code:borderStyle=solid} 
>  org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: %�2. The cache might have expired and have been removed.
>   at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:98)
>   at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:218)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:201)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1203)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1517)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1592)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1556)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1198)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3178)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29925)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:116)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:96)
>   at java.lang.Thread.run(Thread.java:745)
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:111)
>   at 
> org.apache.phoenix.iterate.TableResultIterator.initScanner(TableResultIterator.java:127)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:108)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:103)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: %�2. The cache might have expired and have been removed.
>   at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:98)
>   at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:218)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:201)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1203)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1517)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1592)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1556)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1198)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3178)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29925)
>   at org.apache.h

[jira] [Updated] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-07-19 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2900:
--
Attachment: (was: phoenix-2900.patch)

> Unable to find hash cache once a salted table 's first region has split
> ---
>
> Key: PHOENIX-2900
> URL: https://issues.apache.org/jira/browse/PHOENIX-2900
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
> Environment: Phoenix-4.7.0-HBase-0.98,HBase-0.98.6-cdh5.3.2
>Reporter: chenglei
>Assignee: chenglei
>Priority: Critical
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2900_v1.patch
>
>
> When I join a salted table (which has been split after creation) with another
> table in my business system, I get the following error, even though I clear the
> salted table's TableRegionCache:
> {code:borderStyle=solid} 
>  org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: %�2. The cache might have expired and have been removed.
>   at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:98)
>   at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:218)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:201)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1203)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1517)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1592)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1556)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1198)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3178)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29925)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:116)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:96)
>   at java.lang.Thread.run(Thread.java:745)
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:111)
>   at 
> org.apache.phoenix.iterate.TableResultIterator.initScanner(TableResultIterator.java:127)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:108)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:103)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: %�2. The cache might have expired and have been removed.
>   at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:98)
>   at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:218)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:201)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1203)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1517)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1592)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1556)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1198)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3178)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29925)
>   at org.apache.h

[jira] [Commented] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-07-19 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15384745#comment-15384745
 ] 

Josh Elser commented on PHOENIX-2900:
-

[~jamestaylor], it looks like it did run, but compilation failed (and it chose
not to comment?).

https://builds.apache.org/view/PreCommit%20Builds/job/PreCommit-PHOENIX-Build/454/console

{code}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.0:compile (default-compile) on 
project phoenix-core: Compilation failure: Compilation failure:
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-PHOENIX-Build/phoenix-core/src/main/java/org/apache/hadoop/hbase/ipc/PhoenixRpcScheduler.java:[32,8]
 org.apache.hadoop.hbase.ipc.PhoenixRpcScheduler is not abstract and does not 
override abstract method dispatch(org.apache.hadoop.hbase.ipc.CallRunner) in 
org.apache.hadoop.hbase.ipc.RpcScheduler
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-PHOENIX-Build/phoenix-core/src/main/java/org/apache/hadoop/hbase/ipc/PhoenixRpcScheduler.java:[84,20]
 dispatch(org.apache.hadoop.hbase.ipc.CallRunner) in 
org.apache.hadoop.hbase.ipc.PhoenixRpcScheduler cannot override 
dispatch(org.apache.hadoop.hbase.ipc.CallRunner) in 
org.apache.hadoop.hbase.ipc.RpcScheduler
[ERROR] return type boolean is not compatible with void
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-PHOENIX-Build/phoenix-core/src/main/java/org/apache/hadoop/hbase/ipc/PhoenixRpcScheduler.java:[88,46]
 incompatible types
[ERROR] required: boolean
{code}

It seems like something happened in the precommit build that made it think this
was a configuration error and that the job itself failed (instead of the patched
build actually failing).

> Unable to find hash cache once a salted table 's first region has split
> ---
>
> Key: PHOENIX-2900
> URL: https://issues.apache.org/jira/browse/PHOENIX-2900
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
> Environment: Phoenix-4.7.0-HBase-0.98,HBase-0.98.6-cdh5.3.2
>Reporter: chenglei
>Assignee: chenglei
>Priority: Critical
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2900.patch, PHOENIX-2900_v1.patch, 
> phoenix-2900.patch
>
>
> When I join a salted table (which has been split after creation) with another
> table in my business system, I get the following error, even though I clear the
> salted table's TableRegionCache:
> {code:borderStyle=solid} 
>  org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: %�2. The cache might have expired and have been removed.
>   at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:98)
>   at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:218)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:201)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1203)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1517)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1592)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1556)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1198)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3178)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29925)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:116)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:96)
>   at java.lang.Thread.run(Thread.java:745)
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:111)
>   at 
> org.apache.phoenix.iterate.TableResultIterator.initScanner(TableResultIterator.java:127)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:108)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:103)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
>   at 
> java.util.concurrent.ThreadPoolExe

[jira] [Commented] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-07-19 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15384733#comment-15384733
 ] 

James Taylor commented on PHOENIX-2900:
---

[~elserj] - I can't seem to force a test run for the patch against this JIRA. 
Any ideas what I'm doing wrong?

> Unable to find hash cache once a salted table 's first region has split
> ---
>
> Key: PHOENIX-2900
> URL: https://issues.apache.org/jira/browse/PHOENIX-2900
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
> Environment: Phoenix-4.7.0-HBase-0.98,HBase-0.98.6-cdh5.3.2
>Reporter: chenglei
>Assignee: chenglei
>Priority: Critical
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2900.patch, PHOENIX-2900_v1.patch, 
> phoenix-2900.patch
>
>
> When I join a salted table (which has been split after creation) with another
> table in my business system, I get the following error, even though I clear the
> salted table's TableRegionCache:
> {code:borderStyle=solid} 
>  org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: %�2. The cache might have expired and have been removed.
>   at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:98)
>   at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:218)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:201)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1203)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1517)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1592)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1556)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1198)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3178)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29925)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:116)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:96)
>   at java.lang.Thread.run(Thread.java:745)
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:111)
>   at 
> org.apache.phoenix.iterate.TableResultIterator.initScanner(TableResultIterator.java:127)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:108)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:103)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: %�2. The cache might have expired and have been removed.
>   at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:98)
>   at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:218)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:201)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1203)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1517)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1592)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1556)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1198)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServe

Re: [DISCUSS] Licensing in 4.8.0 rc0 (was Fwd: Re: [VOTE] Release of Apache Phoenix 4.8.0-HBase-1.2 RC0)

2016-07-19 Thread James Taylor
Ok, that's great then. We can just combine the separate vote emails into
one. Much easier.
Thanks,
James

On Tuesday, July 19, 2016, Josh Elser  wrote:

> Agreed with Sean. There's no reason that I'm aware of that each target
> HBase version has to be its own VOTE thread. The "all-or-none" semantics
> would logically be captured in a single vote thread.
>
> On Tue, Jul 19, 2016 at 9:26 AM, Sean Busbey  > wrote:
> > AFAIK, PMCs can organize their VOTEs as they please. The only
> > requirement I'm aware of is being able to point at a VOTE that covers
> > the release. I don't see why a single VOTE that covers multiple git
> > REFs and multiple artifacts (even in different directories on
> > dist.apache) would be a problem. I can think of one case where this
> > was done before (Apache NiFi; I think they were in the incubator at
> > the time).
> >
> > Agreed that this kind of process change doesn't need to be blocking.
> > It's just confusing that right now we can end up with a mixed vote
> > result across hbase compatibility layers (although I guess that could
> > be considered a feature if a fatal compatibility-layer-specific bug were
> > to show up).
> >
> > On Tue, Jul 19, 2016 at 1:33 AM, James Taylor  > wrote:
> >> If we could have a single vote, that'd be great, but I didn't think that
> >> was possible. Would we be voting on the union of all the source codes
> >> across all four branches? Is it acceptable to be voting on multiple
> >> hash/tags (since they're in different branches)? What about binary
> release?
> >> We'd have multiple tar files, one per branch.
> >>
> >> There's a fair amount of automation and process already developed for
> our
> >> release procedure. This is the way we've been doing things for the last
> 10+
> >> releases (for good or for bad). Unless the new process would be more or
> >> less the same as the old, I think we need to get 4.8.0 out first
> (following
> >> all ASF policies, of course), before changing our documentation,
> >> automation, etc.
> >>
> >> On Tue, Jul 19, 2016 at 8:17 AM, Enis Söztutar  > wrote:
> >>
> >>> The licensing issues should affect all 4 RCs, so they all should fail
> or
> >>> succeed atomically. Having 4.8.0-HBase-0.98 with slightly different
> content
> >>> than 4.8.0-HBase-1.1, etc is just asking for trouble.
> >>>
> >>> Thinking about this, doing the votes together makes sense. Otherwise,
> we
> >>> might end up with 4.8.0 meaning a different thing for different hbase
> >>> versions.
> >>>
> >>> Enis
> >>>
> >>> On Mon, Jul 18, 2016 at 10:34 PM, Sean Busbey  > wrote:
> >>>
> >>> > Am I reading the tallies correctly?
> >>> >
> >>> > 0.98: pass with four +1s
> >>> > 1.0: pass with four +1s
> >>> > 1.1: fail with two +1s
> >>> > 1.2: pass with three +1s, one -1, and one non-binding -1
> >>> >
> >>> > This presumes I did not miss a vote cancellation from a release
> manager
> >>> > (which I've done in the past, tbf).
> >>> >
> >>> > As an aside, could we do these as a single vote in the future?
> >>> >
> >>> > --
> >>> > Sean Busbey
> >>> > On Jul 18, 2016 17:47, "Josh Elser"  > wrote:
> >>> >
> >>> > > Thanks for the response, Andrew!
> >>> > >
> >>> > > I've started knocking out the source-release issues. Will put up a
> >>> patch
> >>> > > with how far I get tonight.
> >>> > >
> >>> > > Andrew Purtell wrote:
> >>> > >
> >>> > >> With PMC hat on I am -1 releasing with known policy violations.
> This
> >>> is
> >>> > >> the same position I took when it was HBase releases at issue.
> Option 1
> >>> > is
> >>> > >> not a good option. Let's go with another.
> >>> > >>
> >>> > >>
> >>> > >> On Jul 18, 2016, at 1:53 PM, Josh Elser >  wrote:
> >>> > >>>
> >>> > >>> (Moving this over to its own thread to avoid bogging down the
> VOTE
> >>> > >>> further)
> >>> > >>>
> >>> > >>> PMC, what say you? I have cycles to work on this now.
> >>> > >>>
> >>> > >>>  Original Message 
> >>> > >>> Subject: Re: [VOTE] Release of Apache Phoenix 4.8.0-HBase-1.2 RC0
> >>> > >>> Date: Mon, 18 Jul 2016 14:43:54 -0400
> >>> > >>> From: Josh Elser>
> >>> > >>> To: dev@phoenix.apache.org 
> >>> > >>>
> >>> > >>> Sean Busbey wrote:
> >>> > >>>
> >>> >  On Mon, Jul 18, 2016 at 12:05 PM, Ankit Singhal
> >>> >  >   wrote:
> >>> > 
> >>> > > Now we have three options to go forward with 4.8 release (or
> >>> whether
> >>> > to
> >>> > > include licenses and notices for the dependency used now or
> >>> later):-
> >>> > >
> >>> > > *Option 1:- Go with this RC0 for 4.8 release.*
> >>> > > -- As the build is functionally good and stable.
> >>> > > -- It has been delayed already and there are some
> project
> >>> > > which are
> >>> > > relying on this(as 4.8 works with HBase 1.2)
> >>> > > -- We have been releasing like this from past few
> releases.
> >>> > > -- RC has binding votes required for go head.
> >>> > > -- Fix license and notice issue in fut

[jira] [Commented] (PHOENIX-1367) VIEW derived from another VIEW doesn't use parent VIEW indexes

2016-07-19 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15384711#comment-15384711
 ] 

James Taylor commented on PHOENIX-1367:
---

FYI, I filed PHOENIX-3100 to prevent users from falling into this trap.

> VIEW derived from another VIEW doesn't use parent VIEW indexes
> --
>
> Key: PHOENIX-1367
> URL: https://issues.apache.org/jira/browse/PHOENIX-1367
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
> Attachments: PHOENIX_1367.test.patch
>
>
> If a VIEW has an index and another VIEW is derived from it, the child view 
> will not use the parent view's indexes.
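
A hedged reproduction sketch over the Phoenix JDBC driver; the connection URL, table, view, and index names are invented for illustration and do not come from this issue or the attached test patch:

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ChildViewIndexRepro {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            stmt.execute("CREATE TABLE base_t (k VARCHAR PRIMARY KEY, v1 VARCHAR, v2 VARCHAR)");
            stmt.execute("CREATE VIEW parent_v AS SELECT * FROM base_t WHERE v1 = 'a'");
            stmt.execute("CREATE INDEX parent_v_idx ON parent_v (v2)");
            stmt.execute("CREATE VIEW child_v AS SELECT * FROM parent_v WHERE v2 > 'b'");
            // Per this issue, the child view's query plan does not use parent_v_idx,
            // while the equivalent query against parent_v would.
            stmt.executeQuery("EXPLAIN SELECT k FROM child_v WHERE v2 = 'c'");
        }
    }
}
{code}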



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3084) Licensing issues with source release

2016-07-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15384710#comment-15384710
 ] 

ASF GitHub Bot commented on PHOENIX-3084:
-

Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/183#discussion_r71407328
  
--- Diff: LICENSE ---
@@ -200,3 +200,90 @@
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
+
+---
+
+This product bundles Sqlline (https://github.com/julianhyde/sqlline)
+which is licensed under the 3-clause BSD license
+
+Copyright (c) 2002,2003,2004,2005,2006,2007 Marc Prud'hommeaux
+Copyright (c) 2004-2010 The Eigenbase Project
+Copyright (c) 2013-2014 Julian Hyde
+All rights reserved.
+
+---
+
+This product bundles portions of AngularJS (https://angularjs.org/) which
--- End diff --

Oh right. That was done in 7f54cd8. Still though, that was straightforward 
to fix.


> Licensing issues with source release
> 
>
> Key: PHOENIX-3084
> URL: https://issues.apache.org/jira/browse/PHOENIX-3084
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 4.8.0
>Reporter: Josh Elser
>
> On vetting the 4.8.0-HBase-1.2-rc0 source release, i found numerous issues 
> with the licensing of bundled software (the LICENSE and NOTICE files).
> Original post: 
> https://lists.apache.org/thread.html/f887f8213a81881df8e25cf63ab076b019fd46113fb25f8c8a085412@%3Cdev.phoenix.apache.org%3E
> Will let this serve as an umbrella to fix the various issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-3100) Disallow creation of index on table/view with child views

2016-07-19 Thread James Taylor (JIRA)
James Taylor created PHOENIX-3100:
-

 Summary: Disallow creation of index on table/view with child views
 Key: PHOENIX-3100
 URL: https://issues.apache.org/jira/browse/PHOENIX-3100
 Project: Phoenix
  Issue Type: Sub-task
Reporter: James Taylor


Until PHOENIX-1499 is fixed, we should disallow the creation of indexes on a 
table or view that has child views. In addition, if a view already has indexes, 
we should disallow the creation of child views.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request #183: PHOENIX-3084 source release licensing issues.

2016-07-19 Thread joshelser
Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/183#discussion_r71407328
  
--- Diff: LICENSE ---
@@ -200,3 +200,90 @@
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
+
+---
+
+This product bundles Sqlline (https://github.com/julianhyde/sqlline)
+which is licensed under the 3-clause BSD license
+
+Copyright (c) 2002,2003,2004,2005,2006,2007 Marc Prud'hommeaux
+Copyright (c) 2004-2010 The Eigenbase Project
+Copyright (c) 2013-2014 Julian Hyde
+All rights reserved.
+
+---
+
+This product bundles portions of AngularJS (https://angularjs.org/) which
--- End diff --

Oh right. That was done in 7f54cd8. Still though, that was straightforward 
to fix.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-3084) Licensing issues with source release

2016-07-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15384708#comment-15384708
 ] 

ASF GitHub Bot commented on PHOENIX-3084:
-

Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/183#discussion_r71406832
  
--- Diff: LICENSE ---
@@ -200,3 +200,90 @@
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
+
+---
+
+This product bundles Sqlline (https://github.com/julianhyde/sqlline)
+which is licensed under the 3-clause BSD license
+
+Copyright (c) 2002,2003,2004,2005,2006,2007 Marc Prud'hommeaux
+Copyright (c) 2004-2010 The Eigenbase Project
+Copyright (c) 2013-2014 Julian Hyde
+All rights reserved.
+
+---
+
+This product bundles portions of AngularJS (https://angularjs.org/) which
--- End diff --

What about the missing ASF header for the javascript files?


> Licensing issues with source release
> 
>
> Key: PHOENIX-3084
> URL: https://issues.apache.org/jira/browse/PHOENIX-3084
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 4.8.0
>Reporter: Josh Elser
>
> On vetting the 4.8.0-HBase-1.2-rc0 source release, I found numerous issues 
> with the licensing of bundled software (the LICENSE and NOTICE files).
> Original post: 
> https://lists.apache.org/thread.html/f887f8213a81881df8e25cf63ab076b019fd46113fb25f8c8a085412@%3Cdev.phoenix.apache.org%3E
> Will let this serve as an umbrella to fix the various issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request #183: PHOENIX-3084 source release licensing issues.

2016-07-19 Thread JamesRTaylor
Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/183#discussion_r71406832
  
--- Diff: LICENSE ---
@@ -200,3 +200,90 @@
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
+
+---
+
+This product bundles Sqlline (https://github.com/julianhyde/sqlline)
+which is licensed under the 3-clause BSD license
+
+Copyright (c) 2002,2003,2004,2005,2006,2007 Marc Prud'hommeaux
+Copyright (c) 2004-2010 The Eigenbase Project
+Copyright (c) 2013-2014 Julian Hyde
+All rights reserved.
+
+---
+
+This product bundles portions of AngularJS (https://angularjs.org/) which
--- End diff --

What about the missing ASF header for the javascript files?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-3097) Incompatibilities with HBase 0.98.6

2016-07-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15384704#comment-15384704
 ] 

ASF GitHub Bot commented on PHOENIX-3097:
-

Github user JamesRTaylor commented on the issue:

https://github.com/apache/phoenix/pull/185
  
@apurtell - would you have a few spare cycles to take a look, in particular 
at the replacements for User.runAsLoginUser()? These changes are so that older 
0.98 versions still work with Phoenix 4.8.
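
For context, a minimal sketch of the two replacements under discussion, written 
against APIs that exist in HBase 0.98.6 (the class and method names are 
illustrative, not the actual patch):

{code}
import java.io.IOException;
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.security.UserGroupInformation;

public class Hbase0986CompatSketch {

    // env.getRegionInfo() is not available in 0.98.6, but the region itself
    // exposes the same information:
    static HRegionInfo regionInfo(RegionCoprocessorEnvironment env) {
        return env.getRegion().getRegionInfo();
    }

    // User.runAsLoginUser(...) is not available in 0.98.6; UserGroupInformation
    // can be used directly to run work as the login user:
    static void runAsLoginUser(final Runnable work) throws IOException, InterruptedException {
        UserGroupInformation.getLoginUser().doAs(new PrivilegedExceptionAction<Void>() {
            @Override
            public Void run() {
                work.run();
                return null;
            }
        });
    }
}
{code}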


> Incompatibilities with HBase 0.98.6
> ---
>
> Key: PHOENIX-3097
> URL: https://issues.apache.org/jira/browse/PHOENIX-3097
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Tongzhou Wang
>Assignee: Tongzhou Wang
> Fix For: 4.8.0
>
>
> Two places in the 0.98 code base are not compatible with HBase 0.98.6.
> 1. calls to `RegionCoprocessorEnvironment.getRegionInfo()`. Can be replaced 
> by `env.getRegion().getRegionInfo()`.
> 2. calls to `User.runAsLoginUser()`. Can be replaced by `try 
> {UserGroupInformation.getLoginUser().doAs()} catch ...`



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix issue #185: [PHOENIX-3097] Incompatibilities with HBase 0.98.6

2016-07-19 Thread JamesRTaylor
Github user JamesRTaylor commented on the issue:

https://github.com/apache/phoenix/pull/185
  
@apurtell - would you have a few spare cycles to take a look, in particular 
at the replacements for User.runAsLoginUser()? These changes are so that older 
0.98 versions still work with Phoenix 4.8.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-3097) Incompatibilities with HBase 0.98.6

2016-07-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15384700#comment-15384700
 ] 

ASF GitHub Bot commented on PHOENIX-3097:
-

Github user JamesRTaylor commented on the issue:

https://github.com/apache/phoenix/pull/185
  
Looks like there are merge conflicts - would you mind rebasing, @SsnL?


> Incompatibilities with HBase 0.98.6
> ---
>
> Key: PHOENIX-3097
> URL: https://issues.apache.org/jira/browse/PHOENIX-3097
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Tongzhou Wang
>Assignee: Tongzhou Wang
> Fix For: 4.8.0
>
>
> Two places in the 0.98 code base are not compatible with HBase 0.98.6.
> 1. calls to `RegionCoprocessorEnvironment.getRegionInfo()`. Can be replaced 
> by `env.getRegion().getRegionInfo()`.
> 2. calls to `User.runAsLoginUser()`. Can be replaced by `try 
> {UserGroupInformation.getLoginUser().doAs()} catch ...`



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix issue #185: [PHOENIX-3097] Incompatibilities with HBase 0.98.6

2016-07-19 Thread JamesRTaylor
Github user JamesRTaylor commented on the issue:

https://github.com/apache/phoenix/pull/185
  
Looks like there are merge conflicts - would you mind rebasing, @SsnL?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-3099) Update to Sqlline 1.1.10

2016-07-19 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15384656#comment-15384656
 ] 

Josh Elser commented on PHOENIX-3099:
-

When we do get this update, we need to add the license headers back to 
{{examples/\*\*/\*.sql}} and remove the apache-rat-plugin exclusions for the 
same pattern in the top-level pom.xml.

> Update to Sqlline 1.1.10
> 
>
> Key: PHOENIX-3099
> URL: https://issues.apache.org/jira/browse/PHOENIX-3099
> Project: Phoenix
>  Issue Type: Task
>Reporter: Josh Elser
> Fix For: 4.9.0
>
>
> One of the bugfixes that sqlline 1.1.10 will likely include is a fix for 
> running SQL files which start with a comment. We should try to push for a 
> release and then upgrade Phoenix to use it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3084) Licensing issues with source release

2016-07-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15384655#comment-15384655
 ] 

ASF GitHub Bot commented on PHOENIX-3084:
-

Github user joshelser commented on the issue:

https://github.com/apache/phoenix/pull/183
  
7d9adab removes the license headers from `examples/**/*.sql` and re-adds 
the exclusions to apache-rat-plugin. Filed 
https://issues.apache.org/jira/browse/PHOENIX-3099 to track it.


> Licensing issues with source release
> 
>
> Key: PHOENIX-3084
> URL: https://issues.apache.org/jira/browse/PHOENIX-3084
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 4.8.0
>Reporter: Josh Elser
>
> On vetting the 4.8.0-HBase-1.2-rc0 source release, I found numerous issues 
> with the licensing of bundled software (the LICENSE and NOTICE files).
> Original post: 
> https://lists.apache.org/thread.html/f887f8213a81881df8e25cf63ab076b019fd46113fb25f8c8a085412@%3Cdev.phoenix.apache.org%3E
> Will let this serve as an umbrella to fix the various issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix issue #183: PHOENIX-3084 source release licensing issues.

2016-07-19 Thread joshelser
Github user joshelser commented on the issue:

https://github.com/apache/phoenix/pull/183
  
7d9adab removes the license headers from `examples/**/*.sql` and re-adds 
the exclusions to apache-rat-plugin. Filed 
https://issues.apache.org/jira/browse/PHOENIX-3099 to track it.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (PHOENIX-3099) Update to Sqlline 1.1.10

2016-07-19 Thread Josh Elser (JIRA)
Josh Elser created PHOENIX-3099:
---

 Summary: Update to Sqlline 1.1.10
 Key: PHOENIX-3099
 URL: https://issues.apache.org/jira/browse/PHOENIX-3099
 Project: Phoenix
  Issue Type: Task
Reporter: Josh Elser
 Fix For: 4.9.0


One of the bugfixes that sqlline 1.1.10 will likely include is a fix for 
running SQL files which start with a comment. We should try to push for a 
release and then upgrade Phoenix to use it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3084) Licensing issues with source release

2016-07-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15384643#comment-15384643
 ] 

ASF GitHub Bot commented on PHOENIX-3084:
-

Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/183#discussion_r71396909
  
--- Diff: LICENSE ---
@@ -200,3 +200,90 @@
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
+
+---
+
+This product bundles Sqlline (https://github.com/julianhyde/sqlline)
+which is licensed under the 3-clause BSD license
+
+Copyright (c) 2002,2003,2004,2005,2006,2007 Marc Prud'hommeaux
+Copyright (c) 2004-2010 The Eigenbase Project
+Copyright (c) 2013-2014 Julian Hyde
+All rights reserved.
+
+---
+
+This product bundles portions of AngularJS (https://angularjs.org/) which
--- End diff --

The text added here should be all that is needed for 4.8.0's source release 
for the trace UI stuff. The accompanying binary artifact just needs the same 
text copied into its LICENSE and NOTICE. While we could move this out for 
4.8.0, I'd say we can decide what to do in 4.9.0 (whether this code gets 
dropped, moved into some other repo, etc.). We certainly can just remove it 
now if desired.


> Licensing issues with source release
> 
>
> Key: PHOENIX-3084
> URL: https://issues.apache.org/jira/browse/PHOENIX-3084
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 4.8.0
>Reporter: Josh Elser
>
> On vetting the 4.8.0-HBase-1.2-rc0 source release, I found numerous issues 
> with the licensing of bundled software (the LICENSE and NOTICE files).
> Original post: 
> https://lists.apache.org/thread.html/f887f8213a81881df8e25cf63ab076b019fd46113fb25f8c8a085412@%3Cdev.phoenix.apache.org%3E
> Will let this serve as an umbrella to fix the various issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request #183: PHOENIX-3084 source release licensing issues.

2016-07-19 Thread joshelser
Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/183#discussion_r71396909
  
--- Diff: LICENSE ---
@@ -200,3 +200,90 @@
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
+
+---
+
+This product bundles Sqlline (https://github.com/julianhyde/sqlline)
+which is licensed under the 3-clause BSD license
+
+Copyright (c) 2002,2003,2004,2005,2006,2007 Marc Prud'hommeaux
+Copyright (c) 2004-2010 The Eigenbase Project
+Copyright (c) 2013-2014 Julian Hyde
+All rights reserved.
+
+---
+
+This product bundles portions of AngularJS (https://angularjs.org/) which
--- End diff --

The text added here should be all that is needed for 4.8.0's source release 
for the trace UI stuff. The accompanying binary artifact just needs the same 
text copied into its LICENSE and NOTICE. While we could move this out for 
4.8.0, I'd say we can decide what to do in 4.9.0 (whether this code gets 
dropped, moved into some other repo, etc.). We certainly can just remove it 
now if desired.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-3084) Licensing issues with source release

2016-07-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15384637#comment-15384637
 ] 

ASF GitHub Bot commented on PHOENIX-3084:
-

Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/183#discussion_r71396256
  
--- Diff: examples/STOCK_SYMBOL.sql ---
@@ -1,3 +1,19 @@
+-- Licensed to the Apache Software Foundation (ASF) under one
+-- or more contributor license agreements.  See the NOTICE file
+-- distributed with this work for additional information
+-- regarding copyright ownership.  The ASF licenses this file
+-- to you under the Apache License, Version 2.0 (the
+-- "License"); you may not use this file except in compliance
+-- with the License.  You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+
--- End diff --

Great. Thanks for the catch! I totally did not use it. Let me re-add these 
to the exclusions and make an issue to track this information.


> Licensing issues with source release
> 
>
> Key: PHOENIX-3084
> URL: https://issues.apache.org/jira/browse/PHOENIX-3084
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 4.8.0
>Reporter: Josh Elser
>
> On vetting the 4.8.0-HBase-1.2-rc0 source release, I found numerous issues 
> with the licensing of bundled software (the LICENSE and NOTICE files).
> Original post: 
> https://lists.apache.org/thread.html/f887f8213a81881df8e25cf63ab076b019fd46113fb25f8c8a085412@%3Cdev.phoenix.apache.org%3E
> Will let this serve as an umbrella to fix the various issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request #183: PHOENIX-3084 source release licensing issues.

2016-07-19 Thread joshelser
Github user joshelser commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/183#discussion_r71396256
  
--- Diff: examples/STOCK_SYMBOL.sql ---
@@ -1,3 +1,19 @@
+-- Licensed to the Apache Software Foundation (ASF) under one
+-- or more contributor license agreements.  See the NOTICE file
+-- distributed with this work for additional information
+-- regarding copyright ownership.  The ASF licenses this file
+-- to you under the Apache License, Version 2.0 (the
+-- "License"); you may not use this file except in compliance
+-- with the License.  You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+
--- End diff --

Great. Thanks for the catch! I totally did not use it. Let me re-add these 
to the exclusions and make an issue to track this information.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-3097) Incompatibilities with HBase 0.98.6

2016-07-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15384620#comment-15384620
 ] 

ASF GitHub Bot commented on PHOENIX-3097:
-

GitHub user SsnL opened a pull request:

https://github.com/apache/phoenix/pull/185

[PHOENIX-3097] Incompatibilities with HBase 0.98.6

This fixes most of the incompatibilities. However, there is a call to 
`HBaseAdmin.truncateTable` (not available in 0.98.6) at 
https://github.com/apache/phoenix/blob/4.x-HBase-0.98/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java#L598.
This isn't easy to fix, as the logic is convoluted on the HBase side.

Fortunately, this only happens when users of view indexes upgrade from an 
older Phoenix version. We could alert users on HBase 0.98.6 before they 
upgrade.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SsnL/phoenix PHOENIX-3097

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/185.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #185






> Incompatibilities with HBase 0.98.6
> ---
>
> Key: PHOENIX-3097
> URL: https://issues.apache.org/jira/browse/PHOENIX-3097
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Tongzhou Wang
>Assignee: Tongzhou Wang
> Fix For: 4.8.0
>
>
> Two places in the 0.98 code base are not compatible with HBase 0.98.6.
> 1. calls to `RegionCoprocessorEnvironment.getRegionInfo()`. Can be replaced 
> by `env.getRegion().getRegionInfo()`.
> 2. calls to `User.runAsLoginUser()`. Can be replaced by `try 
> {UserGroupInformation.getLoginUser().doAs()} catch ...`



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request #185: [PHOENIX-3097] Incompatibilities with HBase 0.98....

2016-07-19 Thread SsnL
GitHub user SsnL opened a pull request:

https://github.com/apache/phoenix/pull/185

[PHOENIX-3097] Incompatibilities with HBase 0.98.6

This fixes most of the incompatibilities. However, there is a call to 
`HBaseAdmin.truncateTable` (not available in 0.98.6) at 
https://github.com/apache/phoenix/blob/4.x-HBase-0.98/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java#L598.
This isn't easy to fix, as the logic is convoluted on the HBase side.

Fortunately, this only happens when users of view indexes upgrade from an 
older Phoenix version. We could alert users on HBase 0.98.6 before they 
upgrade.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SsnL/phoenix PHOENIX-3097

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/185.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #185






---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-3084) Licensing issues with source release

2016-07-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15384594#comment-15384594
 ] 

ASF GitHub Bot commented on PHOENIX-3084:
-

Github user mujtabachohan commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/183#discussion_r71390004
  
--- Diff: examples/STOCK_SYMBOL.sql ---
@@ -1,3 +1,19 @@
+-- Licensed to the Apache Software Foundation (ASF) under one
+-- or more contributor license agreements.  See the NOTICE file
+-- distributed with this work for additional information
+-- regarding copyright ownership.  The ASF licenses this file
+-- to you under the Apache License, Version 2.0 (the
+-- "License"); you may not use this file except in compliance
+-- with the License.  You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+
--- End diff --

This is not fixed in sqlline 1.1.9, which we are using. The fix will be part 
of the yet-unreleased 1.1.10. However, this works fine using the psql.py script.


> Licensing issues with source release
> 
>
> Key: PHOENIX-3084
> URL: https://issues.apache.org/jira/browse/PHOENIX-3084
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 4.8.0
>Reporter: Josh Elser
>
> On vetting the 4.8.0-HBase-1.2-rc0 source release, I found numerous issues 
> with the licensing of bundled software (the LICENSE and NOTICE files).
> Original post: 
> https://lists.apache.org/thread.html/f887f8213a81881df8e25cf63ab076b019fd46113fb25f8c8a085412@%3Cdev.phoenix.apache.org%3E
> Will let this serve as an umbrella to fix the various issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request #183: PHOENIX-3084 source release licensing issues.

2016-07-19 Thread mujtabachohan
Github user mujtabachohan commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/183#discussion_r71390004
  
--- Diff: examples/STOCK_SYMBOL.sql ---
@@ -1,3 +1,19 @@
+-- Licensed to the Apache Software Foundation (ASF) under one
+-- or more contributor license agreements.  See the NOTICE file
+-- distributed with this work for additional information
+-- regarding copyright ownership.  The ASF licenses this file
+-- to you under the Apache License, Version 2.0 (the
+-- "License"); you may not use this file except in compliance
+-- with the License.  You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+
--- End diff --

This is not fixed in sqlline 1.1.9, which we are using. The fix will be part 
of the yet-unreleased 1.1.10. However, this works fine using the psql.py script.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-3098) Possible NegativeArraySizeException while scanning local indexes during regions merge

2016-07-19 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15384588#comment-15384588
 ] 

Sergey Soldatov commented on PHOENIX-3098:
--

LGTM +1.

> Possible NegativeArraySizeException while scanning local indexes during 
> regions merge 
> --
>
> Key: PHOENIX-3098
> URL: https://issues.apache.org/jira/browse/PHOENIX-3098
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Sergio Peleato
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 4.8.0
>
> Attachments: PHOENIX-3098.patch
>
>
> While scanning local indexes during regions merge we might end up with a 
> NegativeArraySizeException, which brings the region server down. The reason is 
> that sometimes HBase won't do a real seek and considers fake keyvalues (which 
> can be the scan start row) as seeked KVs. In that case we end up with this 
> issue when we call peek without a seek. So for local indexes we need to 
> enforce a seek every time we scan local index reference files.
> {noformat}
> 2016-07-15 17:27:04,419 ERROR 
> [B.fifo.QRpcServer.handler=8,queue=2,port=16020] coprocessor.CoprocessorHost: 
> The coprocessor 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator threw 
> java.lang.NegativeArraySizeException
> java.lang.NegativeArraySizeException
>   at 
> org.apache.hadoop.hbase.regionserver.LocalIndexStoreFileScanner.getNewRowkeyByRegionStartKeyReplacedWithSplitKey(LocalIndexStoreFileScanner.java:242)
>   at 
> org.apache.hadoop.hbase.regionserver.LocalIndexStoreFileScanner.getChangedKey(LocalIndexStoreFileScanner.java:76)
>   at 
> org.apache.hadoop.hbase.regionserver.LocalIndexStoreFileScanner.peek(LocalIndexStoreFileScanner.java:68)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.(KeyValueHeap.java:87)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.(KeyValueHeap.java:71)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.resetKVHeap(StoreScanner.java:378)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:227)
>   at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator$1.(IndexHalfStoreFileReaderGenerator.java:259)
>   at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.preStoreScannerOpen(IndexHalfStoreFileReaderGenerator.java:258)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$51.call(RegionCoprocessorHost.java:1284)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1638)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1712)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1677)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preStoreScannerOpen(RegionCoprocessorHost.java:1279)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2110)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:5568)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2626)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2612)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2594)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2271)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2127)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Thanks [~speleato] for finding this issue. Added you as reporter for this 
> issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] Licensing in 4.8.0 rc0 (was Fwd: Re: [VOTE] Release of Apache Phoenix 4.8.0-HBase-1.2 RC0)

2016-07-19 Thread Josh Elser
Agreed with Sean. There's no reason I'm aware of that each target
HBase version has to be its own VOTE thread. The "all-or-none"
semantics would logically be encapsulated in a single vote thread.

On Tue, Jul 19, 2016 at 9:26 AM, Sean Busbey  wrote:
> AFAIK, PMCs can organize their VOTEs as they please. The only
> requirement I'm aware of is being able to point at a VOTE that covers
> the release. I don't see why a single VOTE that covers multiple git
> REFs and multiple artifacts (even in different directories on
> dist.apache) would be a problem. I can think of one case where this
> was done before (Apache NiFi; I think they were in the incubator at
> the time).
>
> Agreed that this kind of process change doesn't need to be blocking.
> It's just confusing that right now we can end up with a mixed vote
> result across hbase compatibility layers (although I guess that could
> be considered a feature if a fatal compability-layer-specific bug were
> to show up).
>
> On Tue, Jul 19, 2016 at 1:33 AM, James Taylor  wrote:
>> If we could have a single vote, that'd be great, but I didn't think that
>> was possible. Would we be voting on the union of all the source codes
>> across all four branches? Is it acceptable to be voting on multiple
>> hash/tags (since they're in different branches)? What about binary release?
>> We'd have multiple tar files, one per branch.
>>
>> There's a fair amount of automation and process already developed for our
>> release procedure. This is the way we've been doing things for the last 10+
>> releases (for good or for bad). Unless the new process would be more or
>> less the same as the old, I think we need to get 4.8.0 out first (following
>> all ASF policies, of course), before changing our documentation,
>> automation, etc.
>>
>> On Tue, Jul 19, 2016 at 8:17 AM, Enis Söztutar  wrote:
>>
>>> The licensing issues should affect all 4 RCs, so they all should fail or
>>> succeed atomically. Having 4.8.0-HBase-0.98 with slightly different content
>>> than 4.8.0-HBase-1.1, etc is just asking for trouble.
>>>
>>> Thinking about this, doing the votes together makes sense. Otherwise, we
>>> might end up with 4.8.0 meaning a different thing for different hbase
>>> versions.
>>>
>>> Enis
>>>
>>> On Mon, Jul 18, 2016 at 10:34 PM, Sean Busbey  wrote:
>>>
>>> > Am I reading the tallies correctly?
>>> >
>>> > 0.98: pass with four +1s
>>> > 1.0: pass with four +1s
>>> > 1.1: fail with two +1s
>>> > 1.2: pass with three +1s, one -1, and one non-binding -1
>>> >
>>> > This presumes I did not miss a vote cancellation from a release manager
>>> > (which I've done in the past, tbf).
>>> >
>>> > As an aside, could we do these as a single vote in the future?
>>> >
>>> > --
>>> > Sean Busbey
>>> > On Jul 18, 2016 17:47, "Josh Elser"  wrote:
>>> >
>>> > > Thanks for the response, Andrew!
>>> > >
>>> > > I've started knocking out the source-release issues. Will put up a
>>> patch
>>> > > with how far I get tonight.
>>> > >
>>> > > Andrew Purtell wrote:
>>> > >
>>> > >> With PMC hat on I am -1 releasing with known policy violations. This
>>> is
>>> > >> the same position I took when it was HBase releases at issue. Option 1
>>> > is
>>> > >> not a good option. Let's go with another.
>>> > >>
>>> > >>
>>> > >> On Jul 18, 2016, at 1:53 PM, Josh Elser  wrote:
>>> > >>>
>>> > >>> (Moving this over to its own thread to avoid bogging down the VOTE
>>> > >>> further)
>>> > >>>
>>> > >>> PMC, what say you? I have cycles to work on this now.
>>> > >>>
>>> > >>>  Original Message 
>>> > >>> Subject: Re: [VOTE] Release of Apache Phoenix 4.8.0-HBase-1.2 RC0
>>> > >>> Date: Mon, 18 Jul 2016 14:43:54 -0400
>>> > >>> From: Josh Elser
>>> > >>> To: dev@phoenix.apache.org
>>> > >>>
>>> > >>> Sean Busbey wrote:
>>> > >>>
>>> >  On Mon, Jul 18, 2016 at 12:05 PM, Ankit Singhal
>>> > wrote:
>>> > 
>>> > > Now we have three options to go forward with 4.8 release (or
>>> whether
>>> > to
>>> > > include licenses and notices for the dependency used now or
>>> later):-
>>> > >
>>> > > *Option 1:- Go with this RC0 for 4.8 release.*
>>> > > -- As the build is functionally good and stable.
>>> > > -- It has been delayed already and there are some project
>>> > > which are
>>> > > relying on this(as 4.8 works with HBase 1.2)
>>> > > -- We have been releasing like this from past few releases.
>>> > > -- RC has binding votes required for go head.
>>> > > -- Fix license and notice issue in future releases.
>>> > >
>>> > 
>>> >  I would *strongly* recommend the PMC not take Option 1's course of
>>> >  action. ASF policy on necessary licensing work is very clear.
>>> >  Additionally, if the current LICENSE/NOTICE work is sufficiently
>>> >  inaccurate that it fails to meet the licensing requirements of
>>> bundled
>>> >  works then the PMC will have moved from accidental nonconfo

[jira] [Commented] (PHOENIX-1367) VIEW derived from another VIEW doesn't use parent VIEW indexes

2016-07-19 Thread Cody Marcel (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15384548#comment-15384548
 ] 

Cody Marcel commented on PHOENIX-1367:
--

[~jamestaylor] Thanks for digging up that Jira. We'd found the docs on 
limitations already, but didn't see anything prior to this on the write path 
into the index.

> VIEW derived from another VIEW doesn't use parent VIEW indexes
> --
>
> Key: PHOENIX-1367
> URL: https://issues.apache.org/jira/browse/PHOENIX-1367
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
> Attachments: PHOENIX_1367.test.patch
>
>
> If a VIEW has an index and another VIEW is derived from it, the child view 
> will not use the parent view's indexes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request #182: Encodecolumns

2016-07-19 Thread twdsilva
Github user twdsilva closed the pull request at:

https://github.com/apache/phoenix/pull/182


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] phoenix pull request #181: Encodecolumns

2016-07-19 Thread twdsilva
Github user twdsilva closed the pull request at:

https://github.com/apache/phoenix/pull/181


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (PHOENIX-2738) Support DROP TABLE in Phoenix-Calcite Integration

2016-07-19 Thread Maryann Xue (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maryann Xue resolved PHOENIX-2738.
--
Resolution: Fixed
  Assignee: Maryann Xue

> Support DROP TABLE in Phoenix-Calcite Integration
> -
>
> Key: PHOENIX-2738
> URL: https://issues.apache.org/jira/browse/PHOENIX-2738
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Maryann Xue
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-2739) Support DROP VIEW in Phoenix-Calcite Integration

2016-07-19 Thread Maryann Xue (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maryann Xue resolved PHOENIX-2739.
--
Resolution: Fixed
  Assignee: Maryann Xue

> Support DROP VIEW in Phoenix-Calcite Integration
> 
>
> Key: PHOENIX-2739
> URL: https://issues.apache.org/jira/browse/PHOENIX-2739
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Maryann Xue
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2405) Improve performance and stability of server side sort for ORDER BY

2016-07-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15384466#comment-15384466
 ] 

ASF GitHub Bot commented on PHOENIX-2405:
-

GitHub user RCheungIT opened a pull request:

https://github.com/apache/phoenix/pull/184

PHOENIX-2405 

https://issues.apache.org/jira/browse/PHOENIX-2405

Hi @maryannxue, I guess this time it may be closer to what you described. 
I think the threshold in DeferredResultIterator should be different from 
the threshold in DeferredByteBufferSegmentQueue, but I don't know where to get 
it.
Also, I can't find a good way to get rid of the offset in the Iterator.
Would you mind giving me some suggestions?

Thanks

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/RCheungIT/phoenix PHOENIX-2405-v3

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/184.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #184


commit 27df6878397b6e6c70c44d288f8d89ba35187880
Author: RCheungIT 
Date:   2016-07-19T15:54:17Z

PHOENIX-2405 Improve performance and stability of server side sort for 
ORDER BY




> Improve performance and stability of server side sort for ORDER BY
> --
>
> Key: PHOENIX-2405
> URL: https://issues.apache.org/jira/browse/PHOENIX-2405
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Haoran Zhang
>  Labels: gsoc2016
> Fix For: 4.9.0
>
>
> We currently use memory mapped files to buffer data as it's being sorted in 
> an ORDER BY (see MappedByteBufferQueue). The following types of exceptions 
> have been seen to occur:
> {code}
> Caused by: java.lang.OutOfMemoryError: Map failed
> at sun.nio.ch.FileChannelImpl.map0(Native Method)
> at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:904)
> {code}
> [~apurtell] has read that memory mapped files are not cleaned up very well 
> in Java:
> {quote}
> "Map failed" means the JVM ran out of virtual address space. If you search 
> around stack overflow for suggestions on what to do when your app (in this 
> case Phoenix) encounters this issue when using mapped buffers, the answers 
> tend toward manually cleaning up the mapped buffers or explicitly triggering 
> a full GC. See 
> http://stackoverflow.com/questions/8553158/prevent-outofmemory-when-using-java-nio-mappedbytebuffer
>  for example. There are apparently long standing JVM/JRE problems with 
> reclamation of mapped buffers. I think we may want to explore in Phoenix a 
> different way to achieve what the current code is doing.
> {quote}
> Instead of using memory mapped files, we could use heap memory, or perhaps 
> there are other mechanisms too.
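
(A minimal sketch of the two buffering mechanisms being contrasted above; the 
class and method names are made up, and MappedByteBufferQueue's real logic is 
more involved.)

{code}
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class SortBufferSketch {

    // Current approach: back the sort buffer with a memory-mapped file. The
    // mapping consumes virtual address space and is not reliably released
    // until the MappedByteBuffer itself is garbage collected.
    static MappedByteBuffer mappedBuffer(String path, int size) throws IOException {
        try (RandomAccessFile file = new RandomAccessFile(path, "rw");
             FileChannel channel = file.getChannel()) {
            return channel.map(FileChannel.MapMode.READ_WRITE, 0, size);
        }
    }

    // Possible alternative: a plain heap buffer, reclaimed by the GC like any
    // other object, at the cost of heap pressure for large sorts.
    static ByteBuffer heapBuffer(int size) {
        return ByteBuffer.allocate(size);
    }
}
{code}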



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request #184: PHOENIX-2405

2016-07-19 Thread RCheungIT
GitHub user RCheungIT opened a pull request:

https://github.com/apache/phoenix/pull/184

PHOENIX-2405 

https://issues.apache.org/jira/browse/PHOENIX-2405

Hi @maryannxue, I guess this time it may be closer to what you described. 
I think the threshold in DeferredResultIterator should be different from 
the threshold in DeferredByteBufferSegmentQueue, but I don't know where to get 
it.
Also, I can't find a good way to get rid of the offset in the Iterator.
Would you mind giving me some suggestions?

Thanks

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/RCheungIT/phoenix PHOENIX-2405-v3

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/184.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #184


commit 27df6878397b6e6c70c44d288f8d89ba35187880
Author: RCheungIT 
Date:   2016-07-19T15:54:17Z

PHOENIX-2405 Improve performance and stability of server side sort for 
ORDER BY




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [DISCUSS] Licensing in 4.8.0 rc0 (was Fwd: Re: [VOTE] Release of Apache Phoenix 4.8.0-HBase-1.2 RC0)

2016-07-19 Thread Sean Busbey
AFAIK, PMCs can organize their VOTEs as they please. The only
requirement I'm aware of is being able to point at a VOTE that covers
the release. I don't see why a single VOTE that covers multiple git
REFs and multiple artifacts (even in different directories on
dist.apache) would be a problem. I can think of one case where this
was done before (Apache NiFi; I think they were in the incubator at
the time).

Agreed that this kind of process change doesn't need to be blocking.
It's just confusing that right now we can end up with a mixed vote
result across hbase compatibility layers (although I guess that could
be considered a feature if a fatal compability-layer-specific bug were
to show up).

On Tue, Jul 19, 2016 at 1:33 AM, James Taylor  wrote:
> If we could have a single vote, that'd be great, but I didn't think that
> was possible. Would we be voting on the union of all the source codes
> across all four branches? Is it acceptable to be voting on multiple
> hash/tags (since they're in different branches)? What about binary release?
> We'd have multiple tar files, one per branch.
>
> There's a fair amount of automation and process already developed for our
> release procedure. This is the way we've been doing things for the last 10+
> releases (for good or for bad). Unless the new process would be more or
> less the same as the old, I think we need to get 4.8.0 out first (following
> all ASF policies, of course), before changing our documentation,
> automation, etc.
>
> On Tue, Jul 19, 2016 at 8:17 AM, Enis Söztutar  wrote:
>
>> The licensing issues should affect all 4 RCs, so they all should fail or
>> succeed atomically. Having 4.8.0-HBase-0.98 with slightly different content
>> than 4.8.0-HBase-1.1, etc is just asking for trouble.
>>
>> Thinking about this, doing the votes together makes sense. Otherwise, we
>> might end up with 4.8.0 meaning a different thing for different hbase
>> versions.
>>
>> Enis
>>
>> On Mon, Jul 18, 2016 at 10:34 PM, Sean Busbey  wrote:
>>
>> > Am I reading the tallies correctly?
>> >
>> > 0.98: pass with four +1s
>> > 1.0: pass with four +1s
>> > 1.1: fail with two +1s
>> > 1.2: pass with three +1s, one -1, and one non-binding -1
>> >
>> > This presumes I did not miss a vote cancellation from a release manager
>> > (which I've done in the past, tbf).
>> >
>> > As an aside, could we do these as a single vote in the future?
>> >
>> > --
>> > Sean Busbey
>> > On Jul 18, 2016 17:47, "Josh Elser"  wrote:
>> >
>> > > Thanks for the response, Andrew!
>> > >
>> > > I've started knocking out the source-release issues. Will put up a
>> patch
>> > > with how far I get tonight.
>> > >
>> > > Andrew Purtell wrote:
>> > >
>> > >> With PMC hat on I am -1 releasing with known policy violations. This
>> is
>> > >> the same position I took when it was HBase releases at issue. Option 1
>> > is
>> > >> not a good option. Let's go with another.
>> > >>
>> > >>
>> > >> On Jul 18, 2016, at 1:53 PM, Josh Elser  wrote:
>> > >>>
>> > >>> (Moving this over to its own thread to avoid bogging down the VOTE
>> > >>> further)
>> > >>>
>> > >>> PMC, what say you? I have cycles to work on this now.
>> > >>>
>> > >>>  Original Message 
>> > >>> Subject: Re: [VOTE] Release of Apache Phoenix 4.8.0-HBase-1.2 RC0
>> > >>> Date: Mon, 18 Jul 2016 14:43:54 -0400
>> > >>> From: Josh Elser
>> > >>> To: dev@phoenix.apache.org
>> > >>>
>> > >>> Sean Busbey wrote:
>> > >>>
>> >  On Mon, Jul 18, 2016 at 12:05 PM, Ankit Singhal
>> > wrote:
>> > 
>> > > Now we have three options to go forward with 4.8 release (or
>> whether
>> > to
>> > > include licenses and notices for the dependency used now or
>> later):-
>> > >
>> > > *Option 1:- Go with this RC0 for 4.8 release.*
>> > > -- As the build is functionally good and stable.
>> > > -- It has been delayed already and there are some project
>> > > which are
>> > > relying on this(as 4.8 works with HBase 1.2)
>> > > -- We have been releasing like this from past few releases.
>> > > -- RC has binding votes required for go head.
>> > > -- Fix license and notice issue in future releases.
>> > >
>> > 
>> >  I would *strongly* recommend the PMC not take Option 1's course of
>> >  action. ASF policy on necessary licensing work is very clear.
>> >  Additionally, if the current LICENSE/NOTICE work is sufficiently
>> >  inaccurate that it fails to meet the licensing requirements of
>> bundled
>> >  works then the PMC will have moved from accidental nonconformance in
>> >  prior releases to knowingly violating the licenses of those works in
>> >  this release. Reading the JIRAs that Josh was helpful enough to
>> file,
>> >  it sounds like the current artifacts would in fact violate the
>> >  licenses of bundled works.
>> > 
>> > >>> In case my opinions weren't already brutally clear: the issue is not
>> > the
>> > >>> function

[jira] [Comment Edited] (PHOENIX-2161) Can't change timeout

2016-07-19 Thread Murtaza Kanchwala (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383898#comment-15383898
 ] 

Murtaza Kanchwala edited comment on PHOENIX-2161 at 7/19/16 10:14 AM:
--

Changed the following properties; still didn't work. Set in every 
hbase-site.xml (Master, RegionServers, etc.):

<property><name>hbase.zookeeper.property.maxClientCnxns</name><value>1</value></property>
<property><name>phoenix.query.timeoutMs</name><value>360</value></property>
<property><name>phoenix.query.keepAliveMs</name><value>360</value></property>
<property><name>phoenix.query.threadPoolSize</name><value>256</value></property>
<property><name>phoenix.query.queueSize</name><value>1</value></property>
<property><name>zookeeper.recovery.retry</name><value>10</value></property>
<property><name>zookeeper.recovery.retry.intervalmill</name><value>360</value></property>
<property><name>hbase.zookeeper.recoverable.waittime</name><value>360</value></property>
<property><name>zookeeper.session.timeout</name><value>360</value></property>
<property><name>hbase.rpc.timeout</name><value>360</value></property>
<property><name>hbase.client.retries.number</name><value>10</value></property>
<property><name>hbase.client.rpc.maxattempts</name><value>10</value></property>
<property><name>hbase.client.operation.timeout</name><value>360</value></property>
<property><name>hbase.rpc.shortoperation.timeout</name><value>360</value></property>
<property><name>ipc.client.connect.timeout</name><value>360</value></property>
<property><name>mapreduce.task.timeout</name><value>360</value></property>
<property><name>hbase.client.scanner.timeout.period</name><value>360</value></property>
<property><name>hbase.cells.scanned.per.heartbeat.check</name><value>360</value></property>



was (Author: mkanchwala):
Changed the following properties; still didn't work:

<property><name>hbase.zookeeper.property.maxClientCnxns</name><value>1</value></property>
<property><name>phoenix.query.timeoutMs</name><value>360</value></property>
<property><name>phoenix.query.keepAliveMs</name><value>360</value></property>
<property><name>phoenix.query.threadPoolSize</name><value>256</value></property>
<property><name>phoenix.query.queueSize</name><value>1</value></property>
<property><name>zookeeper.recovery.retry</name><value>10</value></property>
<property><name>zookeeper.recovery.retry.intervalmill</name><value>360</value></property>
<property><name>hbase.zookeeper.recoverable.waittime</name><value>360</value></property>
<property><name>zookeeper.session.timeout</name><value>360</value></property>
<property><name>hbase.rpc.timeout</name><value>360</value></property>
<property><name>hbase.client.retries.number</name><value>10</value></property>
<property><name>hbase.client.rpc.maxattempts</name><value>10</value></property>
<property><name>hbase.client.operation.timeout</name><value>360</value></property>
<property><name>hbase.rpc.shortoperation.timeout</name><value>360</value></property>
<property><name>ipc.client.connect.timeout</name><value>360</value></property>
<property><name>mapreduce.task.timeout</name><value>360</value></property>
<property><name>hbase.client.scanner.timeout.period</name><value>360</value></property>
<property><name>hbase.cells.scanned.per.heartbeat.check</name><value>360</value></property>


> Can't change timeout
> 
>
> Key: PHOENIX-2161
> URL: https://issues.apache.org/jira/browse/PHOENIX-2161
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0
> Environment: Hadoop with Ambari 2.1.0
> Phoenix 4.4.0.2.3
> HBase 1.1.1.2.3
> HDFS 2.7.1.2.3
> Zookeeper 3.4.6.2.3
>Reporter: Adrià V.
>  Labels: hbase, operation, phoenix, timeout
>
> Phoenix or HBase keeps throwing a timeout exception. I have tried every 
> configuration I could think of to increase it.
> Partial stacktrace:
> {quote}
> Caused by: java.io.IOException: Call to 
> hdp-w-1.c.dks-hadoop.internal/10.240.2.235:16020 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=43, waitTime=60001, 
> operationTimeout=6 expired.
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1242)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1210)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:32651)
> at 
> org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:213)
> at 
> org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:62)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
> at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:369)
> at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:343)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
> ... 4 more
> Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=43, 
> waitTime=60001, operationTimeout=6 expired.
> at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:70)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1184)
> ... 13 more
> {quote}
> The Phoenix (hbase-site.xml) properties:
> - phoenix.query.timeoutMs
> - phoenix.query.keepAliveMs
> I've tried editing the HBase config files and also setting the config in 
> Ambari with the following keys to increase the timeout, with no success:
> - hbase.rpc.timeout
> - dfs.socket.timeout
> - dfs.client.socket-timeout
> - zookeeper.session.timeout
> Full stack trace:
> {quote}
> Error: Encountered exception in sub plan [0] execution. (state=,code=0)
> java.sql.SQLException: Encountered exception in sub plan [0] execution.
> at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:157)
> at org.apache.phoenix.jdbc.PhoenixStatement$1.call(Ph

[jira] [Commented] (PHOENIX-2161) Can't change timeout

2016-07-19 Thread Murtaza Kanchwala (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383898#comment-15383898
 ] 

Murtaza Kanchwala commented on PHOENIX-2161:


Changed the following properties; still didn't work:

<property><name>hbase.zookeeper.property.maxClientCnxns</name><value>1</value></property>
<property><name>phoenix.query.timeoutMs</name><value>360</value></property>
<property><name>phoenix.query.keepAliveMs</name><value>360</value></property>
<property><name>phoenix.query.threadPoolSize</name><value>256</value></property>
<property><name>phoenix.query.queueSize</name><value>1</value></property>
<property><name>zookeeper.recovery.retry</name><value>10</value></property>
<property><name>zookeeper.recovery.retry.intervalmill</name><value>360</value></property>
<property><name>hbase.zookeeper.recoverable.waittime</name><value>360</value></property>
<property><name>zookeeper.session.timeout</name><value>360</value></property>
<property><name>hbase.rpc.timeout</name><value>360</value></property>
<property><name>hbase.client.retries.number</name><value>10</value></property>
<property><name>hbase.client.rpc.maxattempts</name><value>10</value></property>
<property><name>hbase.client.operation.timeout</name><value>360</value></property>
<property><name>hbase.rpc.shortoperation.timeout</name><value>360</value></property>
<property><name>ipc.client.connect.timeout</name><value>360</value></property>
<property><name>mapreduce.task.timeout</name><value>360</value></property>
<property><name>hbase.client.scanner.timeout.period</name><value>360</value></property>
<property><name>hbase.cells.scanned.per.heartbeat.check</name><value>360</value></property>

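(Not a confirmed diagnosis, but worth checking: phoenix.query.timeoutMs and the 
HBase client timeouts are also read from the client-side configuration, so they 
can be passed as connection properties. A minimal sketch with placeholder 
values and a placeholder ZooKeeper quorum:)

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Properties;

public class PhoenixClientTimeoutSketch {
    public static void main(String[] args) throws SQLException {
        Properties props = new Properties();
        // Placeholder values; use whatever timeout the workload actually needs.
        props.setProperty("phoenix.query.timeoutMs", "600000");
        props.setProperty("hbase.rpc.timeout", "600000");
        props.setProperty("hbase.client.scanner.timeout.period", "600000");
        // Placeholder ZooKeeper quorum.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181", props)) {
            // Run the long-running query here.
        }
    }
}
{code}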

> Can't change timeout
> 
>
> Key: PHOENIX-2161
> URL: https://issues.apache.org/jira/browse/PHOENIX-2161
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0
> Environment: Hadoop with Ambari 2.1.0
> Phoenix 4.4.0.2.3
> HBase 1.1.1.2.3
> HDFS 2.7.1.2.3
> Zookeeper 3.4.6.2.3
>Reporter: Adrià V.
>  Labels: hbase, operation, phoenix, timeout
>
> Phoenix or HBase keeps throwing a timeout exception. I have tried every 
> configuration I could think of to increase it.
> Partial stacktrace:
> {quote}
> Caused by: java.io.IOException: Call to 
> hdp-w-1.c.dks-hadoop.internal/10.240.2.235:16020 failed on local exception: 
> org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=43, waitTime=60001, 
> operationTimeout=6 expired.
> at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1242)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1210)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:32651)
> at 
> org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:213)
> at 
> org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:62)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
> at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:369)
> at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:343)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
> ... 4 more
> Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=43, 
> waitTime=60001, operationTimeout=6 expired.
> at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:70)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1184)
> ... 13 more
> {quote}
> The Phoenix (hbase-site.xml) properties:
> - phoenix.query.timeoutMs
> - phoenix.query.keepAliveMs
> I've tried editing the HBase config files and also setting the config in 
> Ambari with the following keys to increase the timeout, with no success:
> - hbase.rpc.timeout
> - dfs.socket.timeout
> - dfs.client.socket-timeout
> - zookeeper.session.timeout
> Full stack trace:
> {quote}
> Error: Encountered exception in sub plan [0] execution. (state=,code=0)
> java.sql.SQLException: Encountered exception in sub plan [0] execution.
> at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:157)
> at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:251)
> at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:241)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:240)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1250)
> at sqlline.Commands.execute(Commands.java:822)
> at sqlline.Commands.sql(Commands.java:732)
> at sqlline.SqlLine.dispatch(SqlLine.java:808)
> at sqlline.SqlLine.begin(SqlLine.java:681)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:292)
> Caused by: org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, 
> exceptions:
> Mon Aug 03 16:47:06 UTC 2015, null, java.net.SocketTimeoutException: 
> callTimeout=6, callDuration=60303: row '' on table 'hive_post_topics' at 
> region=hive_post_topi

[jira] [Commented] (PHOENIX-3097) Incompatibilities with HBase 0.98.6

2016-07-19 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383862#comment-15383862
 ] 

Lars Hofhansl commented on PHOENIX-3097:


[~apurtell], FYI.

> Incompatibilities with HBase 0.98.6
> ---
>
> Key: PHOENIX-3097
> URL: https://issues.apache.org/jira/browse/PHOENIX-3097
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Tongzhou Wang
>Assignee: Tongzhou Wang
> Fix For: 4.8.0
>
>
> Two places in the 0.98 code base are not compatible with HBase 0.98.6.
> 1. calls to `RegionCoprocessorEnvironment.getRegionInfo()`. Can be replaced 
> by `env.getRegion().getRegionInfo()`.
> 2. calls to `User.runAsLoginUser()`. Can be replaced by `try 
> {UserGroupInformation.getLoginUser().doAs()} catch ...`



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3098) Possible NegativeArraySizeException while scanning local indexes during regions merge

2016-07-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383817#comment-15383817
 ] 

Hadoop QA commented on PHOENIX-3098:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12818767/PHOENIX-3098.patch
  against master branch at commit a6f61cb40c3eb031cd3b8b2192a243709bce37c6.
  ATTACHMENT ID: 12818767

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
34 warning messages.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/453//testReport/
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/453//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/453//console

This message is automatically generated.

> Possible NegativeArraySizeException while scanning local indexes during 
> regions merge 
> --
>
> Key: PHOENIX-3098
> URL: https://issues.apache.org/jira/browse/PHOENIX-3098
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Sergio Peleato
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 4.8.0
>
> Attachments: PHOENIX-3098.patch
>
>
> While scanning local indexes during a regions merge we might end up with a 
> NegativeArraySizeException, which brings the region server down. The reason is 
> that HBase sometimes does not do a real seek and treats fake key values (which 
> can be the scan start row) as already-seeked KVs. In that case we hit this issue 
> when peek is called without a prior seek. So for local indexes we need to 
> enforce a seek every time we scan local index reference files.
> {noformat}
> 2016-07-15 17:27:04,419 ERROR 
> [B.fifo.QRpcServer.handler=8,queue=2,port=16020] coprocessor.CoprocessorHost: 
> The coprocessor 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator threw 
> java.lang.NegativeArraySizeException
> java.lang.NegativeArraySizeException
>   at 
> org.apache.hadoop.hbase.regionserver.LocalIndexStoreFileScanner.getNewRowkeyByRegionStartKeyReplacedWithSplitKey(LocalIndexStoreFileScanner.java:242)
>   at 
> org.apache.hadoop.hbase.regionserver.LocalIndexStoreFileScanner.getChangedKey(LocalIndexStoreFileScanner.java:76)
>   at 
> org.apache.hadoop.hbase.regionserver.LocalIndexStoreFileScanner.peek(LocalIndexStoreFileScanner.java:68)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.(KeyValueHeap.java:87)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.(KeyValueHeap.java:71)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.resetKVHeap(StoreScanner.java:378)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:227)
>   at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator$1.(IndexHalfStoreFileReaderGenerator.java:259)
>   at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.preStoreScannerOpen(IndexHalfStoreFileReaderGenerator.java:258)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$51.call(RegionCoprocessorHost.java:1284)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1638)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1712)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1677)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preStoreScannerOpen(RegionCoprocessorHost.java:1279)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2110)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:5568)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2626)
>   at 
> org.apache.hadoop.hbase.region

[jira] [Updated] (PHOENIX-3097) Incompatibilities with HBase 0.98.6

2016-07-19 Thread chenglei (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated PHOENIX-3097:
--
Assignee: Tongzhou Wang  (was: chenglei)

> Incompatibilities with HBase 0.98.6
> ---
>
> Key: PHOENIX-3097
> URL: https://issues.apache.org/jira/browse/PHOENIX-3097
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Tongzhou Wang
>Assignee: Tongzhou Wang
> Fix For: 4.8.0
>
>
> Two places in the 0.98 code base are not compatible with HBase 0.98.6.
> 1. calls to `RegionCoprocessorEnvironment.getRegionInfo()`. Can be replaced 
> by `env.getRegion().getRegionInfo()`.
> 2. calls to `User.runAsLoginUser()`. Can be replaced by `try 
> {UserGroupInformation.getLoginUser().doAs()} catch ...`



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (PHOENIX-3097) Incompatibilities with HBase 0.98.6

2016-07-19 Thread chenglei (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei reassigned PHOENIX-3097:
-

Assignee: chenglei  (was: Tongzhou Wang)

> Incompatibilities with HBase 0.98.6
> ---
>
> Key: PHOENIX-3097
> URL: https://issues.apache.org/jira/browse/PHOENIX-3097
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Tongzhou Wang
>Assignee: chenglei
> Fix For: 4.8.0
>
>
> Two places in the 0.98 code base are not compatible with HBase 0.98.6.
> 1. calls to `RegionCoprocessorEnvironment.getRegionInfo()`. Can be replaced 
> by `env.getRegion().getRegionInfo()`.
> 2. calls to `User.runAsLoginUser()`. Can be replaced by `try 
> {UserGroupInformation.getLoginUser().doAs()} catch ...`



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3097) Incompatibilities with HBase 0.98.6

2016-07-19 Thread Tongzhou Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383766#comment-15383766
 ] 

Tongzhou Wang commented on PHOENIX-3097:


This is great. Our cluster runs 0.98.6. We would love to see the fix. I will 
work on it tomorrow.

> Incompatibilities with HBase 0.98.6
> ---
>
> Key: PHOENIX-3097
> URL: https://issues.apache.org/jira/browse/PHOENIX-3097
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Tongzhou Wang
>Assignee: Tongzhou Wang
> Fix For: 4.8.0
>
>
> Two places in the 0.98 code base are not compatible with HBase 0.98.6.
> 1. calls to `RegionCoprocessorEnvironment.getRegionInfo()`. Can be replaced 
> by `env.getRegion().getRegionInfo()`.
> 2. calls to `User.runAsLoginUser()`. Can be replaced by `try 
> {UserGroupInformation.getLoginUser().doAs()} catch ...`



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3097) Incompatibilities with HBase 0.98.6

2016-07-19 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383761#comment-15383761
 ] 

James Taylor commented on PHOENIX-3097:
---

Thanks for reporting, [~simon_wang]. Yes, your fix sounds fine - I've assigned 
this to you. Seems like a small enough and important enough fix for inclusion 
in 4.8.0 if you can get us a quick fix that's compatible with later 0.98 
releases as well.

> Incompatibilities with HBase 0.98.6
> ---
>
> Key: PHOENIX-3097
> URL: https://issues.apache.org/jira/browse/PHOENIX-3097
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Tongzhou Wang
>Assignee: Tongzhou Wang
> Fix For: 4.8.0
>
>
> Two places in the 0.98 code base are not compatible with HBase 0.98.6.
> 1. calls to `RegionCoprocessorEnvironment.getRegionInfo()`. Can be replaced 
> by `env.getRegion().getRegionInfo()`.
> 2. calls to `User.runAsLoginUser()`. Can be replaced by `try 
> {UserGroupInformation.getLoginUser().doAs()} catch ...`



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3097) Incompatibilities with HBase 0.98.6

2016-07-19 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3097:
--
Fix Version/s: 4.8.0

> Incompatibilities with HBase 0.98.6
> ---
>
> Key: PHOENIX-3097
> URL: https://issues.apache.org/jira/browse/PHOENIX-3097
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Tongzhou Wang
>Assignee: Tongzhou Wang
> Fix For: 4.8.0
>
>
> Two places in the 0.98 code base are not compatible with HBase 0.98.6.
> 1. calls to `RegionCoprocessorEnvironment.getRegionInfo()`. Can be replaced 
> by `env.getRegion().getRegionInfo()`.
> 2. calls to `User.runAsLoginUser()`. Can be replaced by `try 
> {UserGroupInformation.getLoginUser().doAs()} catch ...`



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3097) Incompatibilities with HBase 0.98.6

2016-07-19 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3097:
--
Assignee: Tongzhou Wang

> Incompatibilities with HBase 0.98.6
> ---
>
> Key: PHOENIX-3097
> URL: https://issues.apache.org/jira/browse/PHOENIX-3097
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Tongzhou Wang
>Assignee: Tongzhou Wang
>
> Two places in the 0.98 code base are not compatible with HBase 0.98.6.
> 1. calls to `RegionCoprocessorEnvironment.getRegionInfo()`. Can be replaced 
> by `env.getRegion().getRegionInfo()`.
> 2. calls to `User.runAsLoginUser()`. Can be replaced by `try 
> {UserGroupInformation.getLoginUser().doAs()} catch ...`



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-07-19 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2900:
--
Attachment: PHOENIX-2900_v1.patch

Trying to get a test run in here.

> Unable to find hash cache once a salted table 's first region has split
> ---
>
> Key: PHOENIX-2900
> URL: https://issues.apache.org/jira/browse/PHOENIX-2900
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
> Environment: Phoenix-4.7.0-HBase-0.98,HBase-0.98.6-cdh5.3.2
>Reporter: chenglei
>Assignee: chenglei
>Priority: Critical
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2900.patch, PHOENIX-2900_v1.patch, 
> phoenix-2900.patch
>
>
> When I join a salted table (which has been split after creation) with another 
> table in my business system, I get the following error, even though I clear the 
> salted table's TableRegionCache: 
> {code:borderStyle=solid} 
>  org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: %�2. The cache might have expired and have been removed.
>   at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.(HashJoinRegionScanner.java:98)
>   at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:218)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:201)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1203)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1517)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1592)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1556)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1198)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3178)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29925)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:116)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:96)
>   at java.lang.Thread.run(Thread.java:745)
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:111)
>   at 
> org.apache.phoenix.iterate.TableResultIterator.initScanner(TableResultIterator.java:127)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:108)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:103)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: %�2. The cache might have expired and have been removed.
>   at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.(HashJoinRegionScanner.java:98)
>   at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:218)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:201)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1203)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1517)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1592)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1556)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1198)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3178)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService

[jira] [Updated] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-07-19 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2900:
--
Assignee: chenglei

> Unable to find hash cache once a salted table 's first region has split
> ---
>
> Key: PHOENIX-2900
> URL: https://issues.apache.org/jira/browse/PHOENIX-2900
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
> Environment: Phoenix-4.7.0-HBase-0.98,HBase-0.98.6-cdh5.3.2
>Reporter: chenglei
>Assignee: chenglei
>Priority: Critical
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2900.patch, phoenix-2900.patch
>
>
> When I join a salted table (which has been split after creation) with another 
> table in my business system, I get the following error, even though I clear the 
> salted table's TableRegionCache: 
> {code:borderStyle=solid} 
>  org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: %�2. The cache might have expired and have been removed.
>   at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.(HashJoinRegionScanner.java:98)
>   at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:218)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:201)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1203)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1517)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1592)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1556)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1198)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3178)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29925)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:116)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:96)
>   at java.lang.Thread.run(Thread.java:745)
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:111)
>   at 
> org.apache.phoenix.iterate.TableResultIterator.initScanner(TableResultIterator.java:127)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:108)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:103)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: %�2. The cache might have expired and have been removed.
>   at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.(HashJoinRegionScanner.java:98)
>   at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:218)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:201)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1203)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1517)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1592)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1556)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1198)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3178)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29925)
>   at org.apache.hadoop.

[jira] [Updated] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-07-19 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2900:
--
Fix Version/s: 4.8.0

> Unable to find hash cache once a salted table 's first region has split
> ---
>
> Key: PHOENIX-2900
> URL: https://issues.apache.org/jira/browse/PHOENIX-2900
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
> Environment: Phoenix-4.7.0-HBase-0.98,HBase-0.98.6-cdh5.3.2
>Reporter: chenglei
>Priority: Critical
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2900.patch, phoenix-2900.patch
>
>
> When I join a salted table (which has been split after creation) with another 
> table in my business system, I get the following error, even though I clear the 
> salted table's TableRegionCache: 
> {code:borderStyle=solid} 
>  org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: %�2. The cache might have expired and have been removed.
>   at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.(HashJoinRegionScanner.java:98)
>   at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:218)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:201)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1203)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1517)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1592)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1556)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1198)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3178)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29925)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:116)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:96)
>   at java.lang.Thread.run(Thread.java:745)
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:111)
>   at 
> org.apache.phoenix.iterate.TableResultIterator.initScanner(TableResultIterator.java:127)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:108)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:103)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: %�2. The cache might have expired and have been removed.
>   at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.(HashJoinRegionScanner.java:98)
>   at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:218)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:201)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1203)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1517)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1592)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1556)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1198)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3178)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29925)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcSe

[jira] [Updated] (PHOENIX-2900) Unable to find hash cache once a salted table 's first region has split

2016-07-19 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2900:
--
Summary: Unable to find hash cache once a salted table 's first region has 
split  (was: Once a salted table 's first region has been split, the join SQL 
which the salted table as LHS may cause "Could not find hash cache for joinId" 
error,even though we clear the salted table 's TableRegionCache.)

> Unable to find hash cache once a salted table 's first region has split
> ---
>
> Key: PHOENIX-2900
> URL: https://issues.apache.org/jira/browse/PHOENIX-2900
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
> Environment: Phoenix-4.7.0-HBase-0.98,HBase-0.98.6-cdh5.3.2
>Reporter: chenglei
>Priority: Critical
> Fix For: 4.9.0
>
> Attachments: PHOENIX-2900.patch, phoenix-2900.patch
>
>
> When I join a salted table (which has been split after creation) with another 
> table in my business system, I get the following error, even though I clear the 
> salted table's TableRegionCache: 
> {code:borderStyle=solid} 
>  org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: %�2. The cache might have expired and have been removed.
>   at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.(HashJoinRegionScanner.java:98)
>   at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:218)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:201)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1203)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1517)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1592)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1556)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1198)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3178)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29925)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:116)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:96)
>   at java.lang.Thread.run(Thread.java:745)
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:111)
>   at 
> org.apache.phoenix.iterate.TableResultIterator.initScanner(TableResultIterator.java:127)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:108)
>   at 
> org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:103)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: %�2. The cache might have expired and have been removed.
>   at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.(HashJoinRegionScanner.java:98)
>   at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:218)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:201)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1203)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1517)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1592)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1556)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1198)
>   

[jira] [Commented] (PHOENIX-3077) Add documentation for APPEND_ONLY_SCHEMA, AUTO_PARTITION_SEQ

2016-07-19 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383751#comment-15383751
 ] 

James Taylor commented on PHOENIX-3077:
---

Looks good, but for AUTO_PARTITION_SEQ can you add one more sentence, something 
like this? 
bq. With this option set, we prevent allocating a sequence in the event that 
the view already exists.

> Add documentation for APPEND_ONLY_SCHEMA, AUTO_PARTITION_SEQ
> 
>
> Key: PHOENIX-3077
> URL: https://issues.apache.org/jira/browse/PHOENIX-3077
> Project: Phoenix
>  Issue Type: Task
>Reporter: prakul agarwal
>Assignee: Thomas D'Silva
>Priority: Minor
> Attachments: params.gif
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2785) Do not store NULLs for immutable tables

2016-07-19 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383745#comment-15383745
 ] 

James Taylor commented on PHOENIX-2785:
---

How about a valid UPSERT (binding some KeyValue columns to null) followed by a 
raw scan to ensure there are no delete markers?
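
A rough sketch of such a test, assuming a table T with a nullable KeyValue column COL1, a local Phoenix JDBC URL, and the HBase 1.x client API; none of these names come from the actual patch:

{code:java}
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Types;

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

public class NoDeleteMarkersCheck {
    public static void main(String[] args) throws Exception {
        // Valid UPSERT that binds a KeyValue column to null.
        try (java.sql.Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             PreparedStatement stmt = conn.prepareStatement("UPSERT INTO T (PK, COL1) VALUES (?, ?)")) {
            stmt.setString(1, "row1");
            stmt.setNull(2, Types.VARCHAR);
            stmt.executeUpdate();
            conn.commit();
        }

        // A raw scan also returns delete markers; assert that none were written.
        try (org.apache.hadoop.hbase.client.Connection hconn =
                 ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = hconn.getTable(TableName.valueOf("T"))) {
            Scan scan = new Scan();
            scan.setRaw(true);
            try (ResultScanner scanner = table.getScanner(scan)) {
                for (Result r : scanner) {
                    for (Cell c : r.rawCells()) {
                        if (CellUtil.isDelete(c)) {
                            throw new AssertionError("unexpected delete marker: " + c);
                        }
                    }
                }
            }
        }
    }
}
{code}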

> Do not store NULLs for immutable tables
> ---
>
> Key: PHOENIX-2785
> URL: https://issues.apache.org/jira/browse/PHOENIX-2785
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.7.0
>Reporter: Lars Hofhansl
>Priority: Minor
> Attachments: 2785-v2.txt, 2785.txt
>
>
> Currently we do store Delete markers (or explicit Nulls). For immutable 
> tables that is not necessary. Null is then not distinguishable from an absent 
> column.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3042) Using functional index expression in where statement for join query fails.

2016-07-19 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3042:
--
Fix Version/s: (was: 4.8.1)
   4.8.0

> Using functional index expression in where statement for join query fails. 
> ---
>
> Key: PHOENIX-3042
> URL: https://issues.apache.org/jira/browse/PHOENIX-3042
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Sergey Soldatov
>Assignee: Thomas D'Silva
> Fix For: 4.8.0
>
> Attachments: PHOENIX-3042-V2.patch, PHOENIX-3042-v3.patch, 
> PHOENIX-3042.patch
>
>
> A simple scenario:
> {noformat}
> CREATE TABLE customer_phx ( c_customer_sk varchar primary key, c_first_name 
> varchar, c_last_name varchar );
> UPSERT INTO customer_phx values ( '1', 'David', 'Smith');
> CREATE LOCAL INDEX CUSTINDEX ON customer_phx (c_customer_sk || c_first_name 
> asc) include (c_customer_sk);
> select c.c_customer_sk from  customer_phx c left outer join customer_phx c2 
> on c.c_customer_sk = c2.c_customer_sk where c.c_customer_sk || c.c_first_name 
> = '1David';
> {noformat}
> It fails with an Exception :
> {noformat}
> Error: ERROR 504 (42703): Undefined column. columnName=C_FIRST_NAME 
> (state=42703,code=504)
> org.apache.phoenix.schema.ColumnNotFoundException: ERROR 504 (42703): 
> Undefined column. columnName=C_FIRST_NAME
>   at 
> org.apache.phoenix.compile.WhereCompiler$WhereExpressionCompiler.resolveColumn(WhereCompiler.java:190)
>   at 
> org.apache.phoenix.compile.WhereCompiler$WhereExpressionCompiler.visit(WhereCompiler.java:169)
>   at 
> org.apache.phoenix.compile.WhereCompiler$WhereExpressionCompiler.visit(WhereCompiler.java:156)
>   at 
> org.apache.phoenix.parse.ColumnParseNode.accept(ColumnParseNode.java:56)
>   at 
> org.apache.phoenix.parse.CompoundParseNode.acceptChildren(CompoundParseNode.java:64)
>   at 
> org.apache.phoenix.parse.StringConcatParseNode.accept(StringConcatParseNode.java:46)
>   at 
> org.apache.phoenix.parse.CompoundParseNode.acceptChildren(CompoundParseNode.java:64)
>   at 
> org.apache.phoenix.parse.ComparisonParseNode.accept(ComparisonParseNode.java:45)
>   at 
> org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:130)
>   at 
> org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:100)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:556)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compileJoinQuery(QueryCompiler.java:324)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:200)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:157)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:404)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:378)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:271)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:266)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:265)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1444)
>   at sqlline.Commands.execute(Commands.java:822)
>   at sqlline.Commands.sql(Commands.java:732)
>   at sqlline.SqlLine.dispatch(SqlLine.java:807)
>   at sqlline.SqlLine.begin(SqlLine.java:681)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:292)
> {noformat}
> Meanwhile using the same where statement without join works just fine.
> Any ideas [~jamestaylor], [~ramkrishna.s.vasude...@gmail.com] ?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1499) VIEW indexes are not kept in sync when parent table updated directly

2016-07-19 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-1499:
--
Summary: VIEW indexes are not kept in sync when parent table updated 
directly  (was: VIEW indexes are not kept in sync when physical table updated 
directly)

> VIEW indexes are not kept in sync when parent table updated directly
> 
>
> Key: PHOENIX-1499
> URL: https://issues.apache.org/jira/browse/PHOENIX-1499
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>
> If a physical table is updated directly then the indexes on views over the 
> table will not be maintained. The same applies to a view that has child 
> views. All updates should come through the child view, not the base view.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1499) VIEW indexes are not kept in sync when physical table updated directly

2016-07-19 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-1499:
--
Description: If a physical table is updated directly then the indexes on 
views over the table will not be maintained. The same applies to a view that 
has child views. All updates should come through the child view, not the base 
view.  (was: If a physical table is updated directly then the indexes on views 
over the table will not be maintained.)

> VIEW indexes are not kept in sync when physical table updated directly
> --
>
> Key: PHOENIX-1499
> URL: https://issues.apache.org/jira/browse/PHOENIX-1499
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>
> If a physical table is updated directly then the indexes on views over the 
> table will not be maintained. The same applies to a view that has child 
> views. All updates should come through the child view, not the base view.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-1367) VIEW derived from another VIEW doesn't use parent VIEW indexes

2016-07-19 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-1367.
---
Resolution: Duplicate

See PHOENIX-1499 and http://phoenix.apache.org/views.html#Limitations. Updates 
to views need to be made on the leaf view.

> VIEW derived from another VIEW doesn't use parent VIEW indexes
> --
>
> Key: PHOENIX-1367
> URL: https://issues.apache.org/jira/browse/PHOENIX-1367
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
> Attachments: PHOENIX_1367.test.patch
>
>
> If a VIEW has an index and another VIEW is derived from it, the child view 
> will not use the parent view's indexes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3070) Unnecessary use of UUID.randomUUID()

2016-07-19 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383736#comment-15383736
 ] 

James Taylor commented on PHOENIX-3070:
---

Since we're going to have to roll another RC, I think it's fine to get this 
into 4.8.0.

> Unnecessary use of UUID.randomUUID()
> 
>
> Key: PHOENIX-3070
> URL: https://issues.apache.org/jira/browse/PHOENIX-3070
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Minor
> Fix For: 4.8.1
>
> Attachments: 3070.txt
>
>
> I see {{UUID.randomUUID()}} used all over Phoenix a lot.
> {{randomUUID}} uses {{SecureRandom}} internally, which - on my machine - 
> takes almost 3ms, and that is _per UUID_!
> I don't think we need UUIDs from a cryptographically sound random number 
> generator.
> We could do {{new UUID(random.nextLong(), random.nextLong())}}, which takes 
> 0.06ms (60us), or even better: {{new 
> UUID(ThreadLocalRandom.current().nextLong(), 
> ThreadLocalRandom.current().nextLong())}}, which takes less than 0.004ms 
> (4us) on my box.
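
A small, self-contained illustration of the three constructions being compared (note that the hand-built UUIDs do not set the RFC 4122 version/variant bits the way {{randomUUID()}} does); class and variable names are illustrative:

{code:java}
import java.util.Random;
import java.util.UUID;
import java.util.concurrent.ThreadLocalRandom;

public class UuidVariants {
    public static void main(String[] args) {
        Random random = new Random();

        UUID secure = UUID.randomUUID();                        // SecureRandom-backed (slow)
        UUID plain  = new UUID(random.nextLong(), random.nextLong());
        UUID tlr    = new UUID(ThreadLocalRandom.current().nextLong(),
                               ThreadLocalRandom.current().nextLong());

        System.out.println(secure + "\n" + plain + "\n" + tlr);
    }
}
{code}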



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3084) Licensing issues with source release

2016-07-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383735#comment-15383735
 ] 

ASF GitHub Bot commented on PHOENIX-3084:
-

Github user JamesRTaylor commented on the issue:

https://github.com/apache/phoenix/pull/183
  
Thanks, @joshelser! Couple of minor comments. Really appreciate your 
efforts here.


> Licensing issues with source release
> 
>
> Key: PHOENIX-3084
> URL: https://issues.apache.org/jira/browse/PHOENIX-3084
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 4.8.0
>Reporter: Josh Elser
>
> On vetting the 4.8.0-HBase-1.2-rc0 source release, I found numerous issues 
> with the licensing of bundled software (the LICENSE and NOTICE files).
> Original post: 
> https://lists.apache.org/thread.html/f887f8213a81881df8e25cf63ab076b019fd46113fb25f8c8a085412@%3Cdev.phoenix.apache.org%3E
> Will let this serve as an umbrella to fix the various issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix issue #183: PHOENIX-3084 source release licensing issues.

2016-07-19 Thread JamesRTaylor
Github user JamesRTaylor commented on the issue:

https://github.com/apache/phoenix/pull/183
  
Thanks, @joshelser! Couple of minor comments. Really appreciate your 
efforts here.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-3084) Licensing issues with source release

2016-07-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383732#comment-15383732
 ] 

ASF GitHub Bot commented on PHOENIX-3084:
-

Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/183#discussion_r71286007
  
--- Diff: LICENSE ---
@@ -200,3 +200,90 @@
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
+
+---
+
+This product bundles Sqlline (https://github.com/julianhyde/sqlline)
+which is licensed under the 3-clause BSD license
+
+Copyright (c) 2002,2003,2004,2005,2006,2007 Marc Prud'hommeaux
+Copyright (c) 2004-2010 The Eigenbase Project
+Copyright (c) 2013-2014 Julian Hyde
+All rights reserved.
+
+---
+
+This product bundles portions of AngularJS (https://angularjs.org/) which
--- End diff --

The only reason we bundle this project and the ones below it is for the 
trace UI. If this is going to make it more difficult to do a binary release or 
require more work for the source releases, I think we should just remove the 
trace UI from both source and binary releases.


> Licensing issues with source release
> 
>
> Key: PHOENIX-3084
> URL: https://issues.apache.org/jira/browse/PHOENIX-3084
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 4.8.0
>Reporter: Josh Elser
>
> On vetting the 4.8.0-HBase-1.2-rc0 source release, I found numerous issues 
> with the licensing of bundled software (the LICENSE and NOTICE files).
> Original post: 
> https://lists.apache.org/thread.html/f887f8213a81881df8e25cf63ab076b019fd46113fb25f8c8a085412@%3Cdev.phoenix.apache.org%3E
> Will let this serve as an umbrella to fix the various issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request #183: PHOENIX-3084 source release licensing issues.

2016-07-19 Thread JamesRTaylor
Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/183#discussion_r71286007
  
--- Diff: LICENSE ---
@@ -200,3 +200,90 @@
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
+
+---
+
+This product bundles Sqlline (https://github.com/julianhyde/sqlline)
+which is licensed under the 3-clause BSD license
+
+Copyright (c) 2002,2003,2004,2005,2006,2007 Marc Prud'hommeaux
+Copyright (c) 2004-2010 The Eigenbase Project
+Copyright (c) 2013-2014 Julian Hyde
+All rights reserved.
+
+---
+
+This product bundles portions of AngularJS (https://angularjs.org/) which
--- End diff --

The only reason we bundle this project and the ones below it is for the 
trace UI. If this is going to make it more difficult to do a binary release or 
require more work for the source releases, I think we should just remove the 
trace UI from both source and binary releases.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] phoenix pull request #183: PHOENIX-3084 source release licensing issues.

2016-07-19 Thread JamesRTaylor
Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/183#discussion_r71285770
  
--- Diff: examples/STOCK_SYMBOL.sql ---
@@ -1,3 +1,19 @@
+-- Licensed to the Apache Software Foundation (ASF) under one
+-- or more contributor license agreements.  See the NOTICE file
+-- distributed with this work for additional information
+-- regarding copyright ownership.  The ASF licenses this file
+-- to you under the Apache License, Version 2.0 (the
+-- "License"); you may not use this file except in compliance
+-- with the License.  You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+
--- End diff --

There was a sqlline bug which prevented SQL scripts from running where the 
first line was a comment. Would you mind confirming that this has been fixed in 
the version of sqlline we're using? If it's a problem, we can just remove these 
examples as they're not particularly important.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-3084) Licensing issues with source release

2016-07-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383729#comment-15383729
 ] 

ASF GitHub Bot commented on PHOENIX-3084:
-

Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/183#discussion_r71285770
  
--- Diff: examples/STOCK_SYMBOL.sql ---
@@ -1,3 +1,19 @@
+-- Licensed to the Apache Software Foundation (ASF) under one
+-- or more contributor license agreements.  See the NOTICE file
+-- distributed with this work for additional information
+-- regarding copyright ownership.  The ASF licenses this file
+-- to you under the Apache License, Version 2.0 (the
+-- "License"); you may not use this file except in compliance
+-- with the License.  You may obtain a copy of the License at
+--
+-- http://www.apache.org/licenses/LICENSE-2.0
+--
+-- Unless required by applicable law or agreed to in writing, software
+-- distributed under the License is distributed on an "AS IS" BASIS,
+-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+-- See the License for the specific language governing permissions and
+-- limitations under the License.
+
--- End diff --

There was a sqlline bug which prevented SQL scripts from running where the 
first line was a comment. Would you mind confirming that this has been fixed in 
the version of sqlline we're using? If it's a problem, we can just remove these 
examples as they're not particularly important.


> Licensing issues with source release
> 
>
> Key: PHOENIX-3084
> URL: https://issues.apache.org/jira/browse/PHOENIX-3084
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 4.8.0
>Reporter: Josh Elser
>
> On vetting the 4.8.0-HBase-1.2-rc0 source release, I found numerous issues 
> with the licensing of bundled software (the LICENSE and NOTICE files).
> Original post: 
> https://lists.apache.org/thread.html/f887f8213a81881df8e25cf63ab076b019fd46113fb25f8c8a085412@%3Cdev.phoenix.apache.org%3E
> Will let this serve as an umbrella to fix the various issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3098) Possible NegativeArraySizeException while scanning local indexes during regions merge

2016-07-19 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-3098:
-
Attachment: PHOENIX-3098.patch

Here is the patch that fixes it.
[~enis] [~sergey.soldatov], please review.

> Possible NegativeArraySizeException while scanning local indexes during 
> regions merge 
> --
>
> Key: PHOENIX-3098
> URL: https://issues.apache.org/jira/browse/PHOENIX-3098
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Sergio Peleato
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 4.8.0
>
> Attachments: PHOENIX-3098.patch
>
>
> While scanning local indexes during a regions merge we might end up with a 
> NegativeArraySizeException, which brings the region server down. The reason is 
> that HBase sometimes does not do a real seek and treats fake key values (which 
> can be the scan start row) as already-seeked KVs. In that case we hit this issue 
> when peek is called without a prior seek. So for local indexes we need to 
> enforce a seek every time we scan local index reference files.
> {noformat}
> 2016-07-15 17:27:04,419 ERROR 
> [B.fifo.QRpcServer.handler=8,queue=2,port=16020] coprocessor.CoprocessorHost: 
> The coprocessor 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator threw 
> java.lang.NegativeArraySizeException
> java.lang.NegativeArraySizeException
>   at 
> org.apache.hadoop.hbase.regionserver.LocalIndexStoreFileScanner.getNewRowkeyByRegionStartKeyReplacedWithSplitKey(LocalIndexStoreFileScanner.java:242)
>   at 
> org.apache.hadoop.hbase.regionserver.LocalIndexStoreFileScanner.getChangedKey(LocalIndexStoreFileScanner.java:76)
>   at 
> org.apache.hadoop.hbase.regionserver.LocalIndexStoreFileScanner.peek(LocalIndexStoreFileScanner.java:68)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.(KeyValueHeap.java:87)
>   at 
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.(KeyValueHeap.java:71)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.resetKVHeap(StoreScanner.java:378)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:227)
>   at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator$1.(IndexHalfStoreFileReaderGenerator.java:259)
>   at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.preStoreScannerOpen(IndexHalfStoreFileReaderGenerator.java:258)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$51.call(RegionCoprocessorHost.java:1284)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1638)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1712)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1677)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preStoreScannerOpen(RegionCoprocessorHost.java:1279)
>   at 
> org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2110)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:5568)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2626)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2612)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2594)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2271)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2127)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Thanks [~speleato] for finding this issue; I've added you as the reporter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3098) Possible NegativeArraySizeException while scanning local indexes during regions merge

2016-07-19 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-3098:
-
Description: 
While scanning local indexes during a regions merge we might end up with a 
NegativeArraySizeException, which brings the region server down. The reason is that 
HBase sometimes does not do a real seek and treats fake key values (which can be 
the scan start row) as already-seeked KVs. In that case we hit this issue when peek 
is called without a prior seek. So for local indexes we need to enforce a seek 
every time we scan local index reference files.
{noformat}
2016-07-15 17:27:04,419 ERROR [B.fifo.QRpcServer.handler=8,queue=2,port=16020] 
coprocessor.CoprocessorHost: The coprocessor 
org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator threw 
java.lang.NegativeArraySizeException
java.lang.NegativeArraySizeException
at 
org.apache.hadoop.hbase.regionserver.LocalIndexStoreFileScanner.getNewRowkeyByRegionStartKeyReplacedWithSplitKey(LocalIndexStoreFileScanner.java:242)
at 
org.apache.hadoop.hbase.regionserver.LocalIndexStoreFileScanner.getChangedKey(LocalIndexStoreFileScanner.java:76)
at 
org.apache.hadoop.hbase.regionserver.LocalIndexStoreFileScanner.peek(LocalIndexStoreFileScanner.java:68)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.(KeyValueHeap.java:87)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.(KeyValueHeap.java:71)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.resetKVHeap(StoreScanner.java:378)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:227)
at 
org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator$1.(IndexHalfStoreFileReaderGenerator.java:259)
at 
org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.preStoreScannerOpen(IndexHalfStoreFileReaderGenerator.java:258)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$51.call(RegionCoprocessorHost.java:1284)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1638)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1712)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1677)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preStoreScannerOpen(RegionCoprocessorHost.java:1279)
at 
org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2110)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:5568)
at 
org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2626)
at 
org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2612)
at 
org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2594)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2271)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2127)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)
{noformat}

Thanks [~speleato] for finding this issue; I've added you as the reporter.

  was:
While scanning local indexes during a regions merge we might end up with a 
NegativeArraySizeException, which brings the region server down. The reason is that 
HBase sometimes does not do a real seek and treats fake key values (which can be 
the scan start row) as already-seeked KVs. In that case we hit this issue when peek 
is called without a prior seek. So for local indexes we need to enforce a seek 
every time we scan local index reference files.
{noformat}
2016-07-15 17:27:04,419 ERROR [B.fifo.QRpcServer.handler=8,queue=2,port=16020] 
coprocessor.CoprocessorHost: The coprocessor 
org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator threw 
java.lang.NegativeArraySizeException
java.lang.NegativeArraySizeException
at 
org.apache.hadoop.hbase.regionserver.LocalIndexStoreFileScanner.getNewRowkeyByRegionStartKeyReplacedWithSplitKey(LocalIndexStoreFileScanner.java:242)
at 
org.apache.hadoop.hbase.regionserver.LocalIndexStoreFileScanner.getChangedKey(LocalIndexStoreFileScanner.java:76)
at 
org.apache.hadoop.hbase.regionserver.LocalIndexStoreFileScanner.peek(LocalIndexStoreFileScanner.java:68)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.(KeyValueHeap.java:87)
at 
org.apache.hadoop.hbase.regionserver.KeyV

[jira] [Created] (PHOENIX-3098) Possible NegativeArraySizeException while scanning local indexes during regions merge

2016-07-19 Thread Rajeshbabu Chintaguntla (JIRA)
Rajeshbabu Chintaguntla created PHOENIX-3098:


 Summary: Possible NegativeArraySizeException while scanning local 
indexes during regions merge 
 Key: PHOENIX-3098
 URL: https://issues.apache.org/jira/browse/PHOENIX-3098
 Project: Phoenix
  Issue Type: Bug
Reporter: Sergio Peleato
Assignee: Rajeshbabu Chintaguntla
 Fix For: 4.8.0


While scanning local indexes during a region merge we might end up with a 
NegativeArraySizeException, which brings the region server down. The reason is 
that HBase sometimes does not perform a real seek and treats fake keyvalues 
(which can be the scan start row) as already-seeked KVs. In that case we hit 
this issue when peek is called without a preceding seek. So for local indexes 
we need to enforce a real seek every time we scan local index reference files.
{noformat}
2016-07-15 17:27:04,419 ERROR [B.fifo.QRpcServer.handler=8,queue=2,port=16020] 
coprocessor.CoprocessorHost: The coprocessor 
org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator threw 
java.lang.NegativeArraySizeException
java.lang.NegativeArraySizeException
at 
org.apache.hadoop.hbase.regionserver.LocalIndexStoreFileScanner.getNewRowkeyByRegionStartKeyReplacedWithSplitKey(LocalIndexStoreFileScanner.java:242)
at 
org.apache.hadoop.hbase.regionserver.LocalIndexStoreFileScanner.getChangedKey(LocalIndexStoreFileScanner.java:76)
at 
org.apache.hadoop.hbase.regionserver.LocalIndexStoreFileScanner.peek(LocalIndexStoreFileScanner.java:68)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.(KeyValueHeap.java:87)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.(KeyValueHeap.java:71)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.resetKVHeap(StoreScanner.java:378)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:227)
at 
org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator$1.(IndexHalfStoreFileReaderGenerator.java:259)
at 
org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.preStoreScannerOpen(IndexHalfStoreFileReaderGenerator.java:258)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$51.call(RegionCoprocessorHost.java:1284)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1638)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1712)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1677)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preStoreScannerOpen(RegionCoprocessorHost.java:1279)
at 
org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2110)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:5568)
at 
org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2626)
at 
org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2612)
at 
org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2594)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2271)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2127)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3097) Incompatibilities with HBase 0.98.6

2016-07-19 Thread Tongzhou Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15383704#comment-15383704
 ] 

Tongzhou Wang commented on PHOENIX-3097:


Hi all, 

this seems like an easy fix to me. Does my proposed fix work? If it looks good, 
I would like to implement it.

Thanks,
Tongzhou

> Incompatibilities with HBase 0.98.6
> ---
>
> Key: PHOENIX-3097
> URL: https://issues.apache.org/jira/browse/PHOENIX-3097
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Tongzhou Wang
>
> Two places in the 0.98 code base are not compatible with HBase 0.98.6:
> 1. Calls to `RegionCoprocessorEnvironment.getRegionInfo()`, which can be replaced 
> by `env.getRegion().getRegionInfo()`.
> 2. Calls to `User.runAsLoginUser()`, which can be replaced by `try 
> {UserGroupInformation.getLoginUser().doAs()} catch ...`.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-3097) Incompatibilities with HBase 0.98.6

2016-07-19 Thread Tongzhou Wang (JIRA)
Tongzhou Wang created PHOENIX-3097:
--

 Summary: Incompatibilities with HBase 0.98.6
 Key: PHOENIX-3097
 URL: https://issues.apache.org/jira/browse/PHOENIX-3097
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.7.0
Reporter: Tongzhou Wang


Two places in the 0.98 code base are not compatible with HBase 0.98.6:

1. Calls to `RegionCoprocessorEnvironment.getRegionInfo()`, which can be replaced by 
`env.getRegion().getRegionInfo()`.
2. Calls to `User.runAsLoginUser()`, which can be replaced by `try 
{UserGroupInformation.getLoginUser().doAs()} catch ...` (see the sketch after this 
list).
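
The following is only a hedged sketch of how the second replacement could look, 
using Hadoop's UserGroupInformation API; the helper name runAsLoginUser and the 
sample action are illustrative and not the actual Phoenix code.

{code:java}
import java.io.IOException;
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.security.UserGroupInformation;

public class RunAsLoginUserSketch {

    /** Runs the given action as the process login user (stand-in for User.runAsLoginUser). */
    static <T> T runAsLoginUser(PrivilegedExceptionAction<T> action) throws IOException {
        try {
            return UserGroupInformation.getLoginUser().doAs(action);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IOException("Interrupted while running as login user", e);
        }
    }

    public static void main(String[] args) throws IOException {
        // Illustrative usage: a real caller would touch HBase/HDFS state that
        // needs the login user's credentials inside the action.
        String who = runAsLoginUser(() -> UserGroupInformation.getCurrentUser().getUserName());
        System.out.println("Ran privileged action as: " + who);
    }
}
{code}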



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)