[jira] [Commented] (PHOENIX-3702) RenewLeaseIT#testLeaseDoesNotTimeout failing on 4.x-HBase-1.1 and master branches

2017-03-01 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891664#comment-15891664
 ] 

James Taylor commented on PHOENIX-3702:
---

+1 for the patch. Are there implications for the renew lease feature? Do we 
still need it? What about Mujtaba's discovery that it doesn't work for big 
aggregate queries? Anything we can do about that?

> RenewLeaseIT#testLeaseDoesNotTimeout failing on 4.x-HBase-1.1 and master 
> branches
> -
>
> Key: PHOENIX-3702
> URL: https://issues.apache.org/jira/browse/PHOENIX-3702
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
>Priority: Blocker
> Fix For: 4.10.0
>
> Attachments: PHOENIX-3702.patch
>
>
> Failure stacktrace:
> {code}
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.phoenix.exception.PhoenixIOException: Failed after retry of 
> OutOfOrderScannerNextException: was there a rpc timeout?
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:113)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:852)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:796)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>   at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>   at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>   at 
> org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:778)
>   at 
> org.apache.phoenix.end2end.RenewLeaseIT.testLeaseDoesNotTimeout(RenewLeaseIT.java:68)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
>   at 
> org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.phoenix.exception.PhoenixIOException: Failed after retry of 
> OutOfOrderScannerNextException: was there a rpc timeout?
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:202)
>   at 
> 

[jira] [Updated] (PHOENIX-3702) RenewLeaseIT#testLeaseDoesNotTimeout failing on 4.x-HBase-1.1 and master branches

2017-03-01 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-3702:
--
Attachment: PHOENIX-3702.patch

I looked at the test closely, and I think what it is trying to test is that 
lease renewal is *not* required for aggregate queries like count(*) when the 
work is done in the preScannerNext() hook of a coprocessor. [~jamestaylor], is 
my understanding correct?

I have modified the test to override the hbase.client.scanner.timeout 
server-side config. The test now passes.

[~jamestaylor], please review.
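
For reference, a rough sketch of the kind of override used, assuming the 
ReadOnlyProps/setUpTestDriver helpers from BaseTest that Phoenix ITs typically 
use (the full property name is hbase.client.scanner.timeout.period; the 5000ms 
value is illustrative, not the value from the patch):
{code}
import java.util.Map;

import org.junit.BeforeClass;

import com.google.common.collect.Maps;
import org.apache.phoenix.util.ReadOnlyProps;

// Sketch: override the server-side scanner timeout before the mini cluster
// starts, so the scanner lease does not expire mid-test.
@BeforeClass
public static void doSetup() throws Exception {
    Map<String, String> props = Maps.newHashMapWithExpectedSize(1);
    props.put("hbase.client.scanner.timeout.period", Integer.toString(5000));
    setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator()));
}
{code}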

> RenewLeaseIT#testLeaseDoesNotTimeout failing on 4.x-HBase-1.1 and master 
> branches
> -
>
> Key: PHOENIX-3702
> URL: https://issues.apache.org/jira/browse/PHOENIX-3702
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
>Priority: Blocker
> Fix For: 4.10.0
>
> Attachments: PHOENIX-3702.patch
>
>
> Failure stacktrace:
> {code}
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.phoenix.exception.PhoenixIOException: Failed after retry of 
> OutOfOrderScannerNextException: was there a rpc timeout?
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:113)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:852)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:796)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>   at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>   at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>   at 
> org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:778)
>   at 
> org.apache.phoenix.end2end.RenewLeaseIT.testLeaseDoesNotTimeout(RenewLeaseIT.java:68)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
>   at 
> org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.phoenix.exception.PhoenixIOException: Failed after retry of 
> OutOfOrderScannerNextException: was there a rpc timeout?
>   at 

[jira] [Commented] (PHOENIX-3536) Remove creating unnecessary phoenix connections in MR Tasks of Hive

2017-03-01 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891604#comment-15891604
 ] 

James Taylor commented on PHOENIX-3536:
---

[~Jeongdae Kim] - thanks for all the contributions. Would you be interested in 
reworking your patch a little bit to address our concerns?

> Remove creating unnecessary phoenix connections in MR Tasks of Hive
> ---
>
> Key: PHOENIX-3536
> URL: https://issues.apache.org/jira/browse/PHOENIX-3536
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Jeongdae Kim
>Assignee: Jeongdae Kim
>  Labels: HivePhoenix
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3536.1.patch
>
>
> PhoenixStorageHandler creates Phoenix connections to build a QueryPlan in both 
> the getSplits phase (MR preparation) and the getRecordReader phase (map) while 
> running an MR job.
> In Phoenix, creating the first connection (QueryServices) for a given URL is 
> expensive, because it checks and loads the Phoenix schema information.
> It is possible to avoid building the query plan again in the map phase 
> (getRecordReader()) by serializing the QueryPlan created in the input format 
> and passing that plan to the record reader.
> This approach improves scan performance by removing the unnecessary connection 
> attempt in the map phase.
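
A minimal sketch of the serialization pattern described above, with hypothetical 
class and method names (the actual patch may carry the plan differently):
{code}
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.Writable;

// Hypothetical split that carries the plan bytes produced once in getSplits(),
// so getRecordReader() can rehydrate the QueryPlan instead of opening a fresh
// Phoenix connection in every map task.
public class PlanCarryingSplit implements Writable {
    private byte[] serializedPlan;

    public PlanCarryingSplit() {} // no-arg constructor required by Writable

    public PlanCarryingSplit(byte[] serializedPlan) {
        this.serializedPlan = serializedPlan;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeInt(serializedPlan.length);
        out.write(serializedPlan);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        serializedPlan = new byte[in.readInt()];
        in.readFully(serializedPlan);
    }

    public byte[] getSerializedPlan() {
        return serializedPlan;
    }
}
{code}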





[jira] [Updated] (PHOENIX-3536) Remove creating unnecessary phoenix connections in MR Tasks of Hive

2017-03-01 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3536:
--
Fix Version/s: 4.11.0

> Remove creating unnecessary phoenix connections in MR Tasks of Hive
> ---
>
> Key: PHOENIX-3536
> URL: https://issues.apache.org/jira/browse/PHOENIX-3536
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Jeongdae Kim
>Assignee: Jeongdae Kim
>  Labels: HivePhoenix
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3536.1.patch
>
>
> PhoenixStorageHandler creates Phoenix connections to build a QueryPlan in both 
> the getSplits phase (MR preparation) and the getRecordReader phase (map) while 
> running an MR job.
> In Phoenix, creating the first connection (QueryServices) for a given URL is 
> expensive, because it checks and loads the Phoenix schema information.
> It is possible to avoid building the query plan again in the map phase 
> (getRecordReader()) by serializing the QueryPlan created in the input format 
> and passing that plan to the record reader.
> This approach improves scan performance by removing the unnecessary connection 
> attempt in the map phase.





[jira] [Updated] (PHOENIX-3391) Supporting Hive 2.1.0 in PhoenixStorageHandler

2017-03-01 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3391:
--
Fix Version/s: 4.10.0

> Supporting Hive 2.1.0 in PhoenixStorageHandler
> --
>
> Key: PHOENIX-3391
> URL: https://issues.apache.org/jira/browse/PHOENIX-3391
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.8.0
>Reporter: Jeongdae Kim
>Assignee: Jeongdae Kim
>Priority: Minor
>  Labels: HivePhoenix
> Fix For: 4.10.0
>
> Attachments: PHOENIX-3391.2.patch, PHOENIX-3391.patch
>
>
> Hive with PhoenixStorageHandler throws a TException when executing a SELECT 
> statement, as shown below. The reason is that a public Hive interface changed 
> (ColumnProjectionUtils.getReadColumnNames()), so Hive throws a 
> NoSuchMethodError in the PhoenixInputFormat class.
> {code}
> org.apache.thrift.transport.TTransportException
>   at 
> org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
>   at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
>   at 
> org.apache.thrift.transport.TSaslTransport.readLength(TSaslTransport.java:376)
>   at 
> org.apache.thrift.transport.TSaslTransport.readFrame(TSaslTransport.java:453)
>   at 
> org.apache.thrift.transport.TSaslTransport.read(TSaslTransport.java:435)
>   at 
> org.apache.thrift.transport.TSaslClientTransport.read(TSaslClientTransport.java:37)
>   at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
>   at 
> org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429)
>   at 
> org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
>   at 
> org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
>   at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:77)
>   at 
> org.apache.hive.service.rpc.thrift.TCLIService$Client.recv_FetchResults(TCLIService.java:559)
>   at 
> org.apache.hive.service.rpc.thrift.TCLIService$Client.FetchResults(TCLIService.java:546)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.apache.hive.jdbc.HiveConnection$SynchronizedHandler.invoke(HiveConnection.java:1426)
>   at com.sun.proxy.$Proxy16.FetchResults(Unknown Source)
>   at 
> org.apache.hive.jdbc.HiveQueryResultSet.next(HiveQueryResultSet.java:372)
> {code}
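
A hypothetical compatibility shim for the interface change named above, assuming 
the old static String[] getReadColumnNames(Configuration) signature (the 
committed patch may solve this differently):
{code}
import java.lang.reflect.Method;

import org.apache.hadoop.conf.Configuration;

// Hypothetical shim: resolve ColumnProjectionUtils.getReadColumnNames
// reflectively so an incompatible Hive version surfaces as a handled error
// rather than a NoSuchMethodError thrown from PhoenixInputFormat.
public final class ReadColumnNamesShim {

    private ReadColumnNamesShim() {}

    public static String[] getReadColumnNames(Configuration conf) {
        try {
            Class<?> utils =
                    Class.forName("org.apache.hadoop.hive.serde2.ColumnProjectionUtils");
            Method method = utils.getMethod("getReadColumnNames", Configuration.class);
            return (String[]) method.invoke(null, conf);
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(
                    "This Hive version exposes an incompatible getReadColumnNames", e);
        }
    }
}
{code}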





[jira] [Updated] (PHOENIX-3503) PhoenixStorageHandler doesn't work properly when execution engine of Hive is Tez.

2017-03-01 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3503:
--
Fix Version/s: 4.10.0

> PhoenixStorageHandler doesn't work properly when execution engine of Hive is 
> Tez.
> --
>
> Key: PHOENIX-3503
> URL: https://issues.apache.org/jira/browse/PHOENIX-3503
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Jeongdae Kim
>Assignee: Jeongdae Kim
>  Labels: HivePhoenix
> Fix For: 4.10.0
>
> Attachments: PHOENIX-3503.patch
>
>
> The Hive storage handler can't correctly parse column types that take 
> parameters (length, precision, scale, ...) from serdeConstants.LIST_COLUMN_TYPES 
> when the execution engine of Hive is Tez.





[jira] [Updated] (PHOENIX-3486) RoundRobinResultIterator doesn't work correctly because of setting Scan's cache size inappropriately in PhoenixInputFormat

2017-03-01 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3486:
--
Fix Version/s: 4.10.0

> RoundRobinResultIterator doesn't work correctly because of setting Scan's 
> cache size inappropriately in PhoenixInputFormat
> --
>
> Key: PHOENIX-3486
> URL: https://issues.apache.org/jira/browse/PHOENIX-3486
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Jeongdae Kim
>Assignee: Jeongdae Kim
>  Labels: HivePhoenix
> Fix For: 4.10.0
>
> Attachments: PHOENIX-3486.patch
>
>
> RoundRobinResultIterator uses "hbase.client.scanner.caching" to fill caches in 
> parallel for all scans. However, because PhoenixInputFormat (phoenix-hive) calls 
> Scan.setCaching(), RoundRobinResultIterator doesn't work correctly: when a Scan 
> has a cache size set via setCaching(), HBase fills the cache using 
> Scan.getCaching() rather than "hbase.client.scanner.caching", while 
> RoundRobinResultIterator still schedules its parallel scans every 
> "hbase.client.scanner.caching" rows. The result is an unintended parallel scan 
> pattern that degrades scan performance.
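
A small sketch of the precedence described above (the decision logic belongs to 
the HBase client; the helper and the default shown here are illustrative):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.client.Scan;

// Illustration: a caching value set on the Scan wins over
// "hbase.client.scanner.caching", so leaving Scan.setCaching() unset lets
// RoundRobinResultIterator's config-driven batching take effect.
static int effectiveCaching(Scan scan, Configuration conf) {
    return scan.getCaching() > 0
            ? scan.getCaching()                                 // Scan-level override
            : conf.getInt("hbase.client.scanner.caching", 100); // config default
}
{code}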





[jira] [Updated] (PHOENIX-3512) PhoenixStorageHandler makes erroneous query string when handling between clauses with date constants.

2017-03-01 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3512:
--
Fix Version/s: 4.10.0

> PhoenixStorageHandler makes erroneous query string when handling between 
> clauses with date constants.
> -
>
> Key: PHOENIX-3512
> URL: https://issues.apache.org/jira/browse/PHOENIX-3512
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Jeongdae Kim
>Assignee: Jeongdae Kim
>  Labels: HivePhoenix
> Fix For: 4.10.0
>
> Attachments: PHOENIX-3512.2.patch, PHOENIX-3512.patch
>
>
> Example: l_shipdate BETWEEN '1992-01-02' AND '1992-02-02' is rewritten as 
> l_shipdate between to_date('69427800') and to_date('69695640')
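
A minimal sketch of the intended rendering, using a hypothetical helper 
(Phoenix's TO_DATE accepts ISO-8601 date strings):
{code}
// Hypothetical helper showing the intended push-down rendering: keep the date
// constants as ISO-8601 strings inside TO_DATE(...) instead of emitting the
// internal numeric form shown above.
static String betweenDates(String column, String lower, String upper) {
    return String.format("%s BETWEEN TO_DATE('%s') AND TO_DATE('%s')",
            column, lower, upper);
}

// betweenDates("l_shipdate", "1992-01-02", "1992-02-02") yields:
//   l_shipdate BETWEEN TO_DATE('1992-01-02') AND TO_DATE('1992-02-02')
{code}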





[jira] [Commented] (PHOENIX-3585) MutableIndexIT testSplitDuringIndexScan and testIndexHalfStoreFileReader fail for transactional tables and local indexes

2017-03-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891559#comment-15891559
 ] 

Hudson commented on PHOENIX-3585:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1573 (See 
[https://builds.apache.org/job/Phoenix-master/1573/])
PHOENIX-3585 MutableIndexIT testSplitDuringIndexScan and (thomas: rev 
1e2a9675c68f2ea52cf0d7fd3dc6dcff585b02cd)
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReaderGenerator.java


> MutableIndexIT testSplitDuringIndexScan and testIndexHalfStoreFileReader fail 
> for transactional tables and local indexes
> 
>
> Key: PHOENIX-3585
> URL: https://issues.apache.org/jira/browse/PHOENIX-3585
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
>Priority: Blocker
> Fix For: 4.10.0
>
> Attachments: diff.patch
>
>
> the tests fail if we use HDFSTransactionStateStorage instead of  
> InMemoryTransactionStateStorage when we create the TransactionManager in 
> BaseTest





[jira] [Commented] (PHOENIX-3705) SkipScanFilter may repeatedly copy rowKey Columns to startKey

2017-03-01 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891527#comment-15891527
 ] 

James Taylor commented on PHOENIX-3705:
---

Thanks, [~comnetwork]. Let me know what you find out about when a row value 
constructor is used. I'm not sure yet what the right thing to do is there. 
Really appreciate you digging into this - you're doing a great job!

> SkipScanFilter may repeatedly copy rowKey Columns to startKey
> -
>
> Key: PHOENIX-3705
> URL: https://issues.apache.org/jira/browse/PHOENIX-3705
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.9.0
>Reporter: chenglei
>Priority: Blocker
> Fix For: 4.10.0
>
> Attachments: PHOENIX-3705_v1.patch
>
>
> See the following simple unit test first. The rowKey is composed of three 
> PInteger columns, and the slots of the SkipScanFilter are:
> [ [[1 - 4]], [5, 7], [[9 - 10]] ]
> When the SkipScanFilter.filterKeyValue method is invoked on a KeyValue whose 
> rowKey is [2,7,11], SkipScanFilter.filterKeyValue correctly returns 
> ReturnCode.SEEK_NEXT_USING_HINT, but while SkipScanFilter.getNextCellHint 
> should return [3,5,9], it actually returns [2,8,5,9], a very strange value, 
> and the unit test fails.
> {code} 
> @Test
> public void testNavigate() {
> RowKeySchemaBuilder builder = new RowKeySchemaBuilder(3);
> for(int i=0;i<3;i++) {
> builder.addField(
> new PDatum() {
> @Override
> public boolean isNullable() {
> return false;
> }
> @Override
> public PDataType getDataType() {
> return PInteger.INSTANCE;
> }
> @Override
> public Integer getMaxLength() {
> return PInteger.INSTANCE.getMaxLength(null);
> }
> @Override
> public Integer getScale() {
> return PInteger.INSTANCE.getScale(null);
> }
> @Override
> public SortOrder getSortOrder() {
> return SortOrder.getDefault();
> }
> }, false, SortOrder.getDefault());
> }
> 
> List<List<KeyRange>> rowKeyColumnRangesList=Arrays.asList(
> Arrays.asList(
> 
> PInteger.INSTANCE.getKeyRange(PInteger.INSTANCE.toBytes(1), true, 
> PInteger.INSTANCE.toBytes(4), true)),
> Arrays.asList(
> KeyRange.getKeyRange(PInteger.INSTANCE.toBytes(5)),
> KeyRange.getKeyRange(PInteger.INSTANCE.toBytes(7))),
> Arrays.asList(
> 
> PInteger.INSTANCE.getKeyRange(PInteger.INSTANCE.toBytes(9), true, 
> PInteger.INSTANCE.toBytes(10), true))
> );
> 
> SkipScanFilter skipScanFilter=new 
> SkipScanFilter(rowKeyColumnRangesList, builder.build());
> 
> System.out.println(skipScanFilter);
> 
> byte[] rowKey=ByteUtil.concat(
> PInteger.INSTANCE.toBytes(2), 
> PInteger.INSTANCE.toBytes(7),
> PInteger.INSTANCE.toBytes(11));
> KeyValue keyValue=KeyValue.createFirstOnRow(rowKey);
> ReturnCode returnCode=skipScanFilter.filterKeyValue(keyValue);
> assertTrue(returnCode == ReturnCode.SEEK_NEXT_USING_HINT);
> Cell nextCellHint=skipScanFilter.getNextCellHint(keyValue);
> 
> assertTrue(Bytes.toStringBinary(CellUtil.cloneRow(nextCellHint)).equals(
> "\\x80\\x00\\x00\\x03\\x80\\x00\\x00\\x05\\x80\\x00\\x00\\x09"));
> }
> {code}
> Let us see what's wrong. The first column of rowKey [2,7,11] is 2, which is in 
> the SkipScanFilter's first slot range [1-4], so position[0] is 0 and we go to 
> the second column, 7, which matches the second range [7] of the SkipScanFilter's 
> second slot [5, 7], so position[1] is 1 and we go to the third column, 11, which 
> is bigger than the third slot range [9 - 10], so position[2] is 0 and the 
> {{SkipScanFilter.ptr}} which points to the current column still stays on the 
> third column. Now we begin to backtrack to the second column: because the second 
> range [7] of the SkipScanFilter's second slot is a singleKey and there is no 
> further range, position[1] is 0 and we continue to backtrack to the first 
> column; because the first slot range [1-4] is not a singleKey, we stop 
> backtracking at the first column.
> Now the problem comes: in the following line 448 of the {{SkipScanFilter.navigate}} 
> 

[jira] [Commented] (PHOENIX-3705) SkipScanFilter may repeatedly copy rowKey Columns to startKey

2017-03-01 Thread chenglei (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891484#comment-15891484
 ] 

chenglei commented on PHOENIX-3705:
---

Thanks for the review, [~jamestaylor]. I will add more tests and make the 
changes following your suggestion.

> SkipScanFilter may repeatedly copy rowKey Columns to startKey
> -
>
> Key: PHOENIX-3705
> URL: https://issues.apache.org/jira/browse/PHOENIX-3705
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.9.0
>Reporter: chenglei
>Priority: Blocker
> Fix For: 4.10.0
>
> Attachments: PHOENIX-3705_v1.patch
>
>
> See the following simple unit test first. The rowKey is composed of three 
> PInteger columns, and the slots of the SkipScanFilter are:
> [ [[1 - 4]], [5, 7], [[9 - 10]] ]
> When the SkipScanFilter.filterKeyValue method is invoked on a KeyValue whose 
> rowKey is [2,7,11], SkipScanFilter.filterKeyValue correctly returns 
> ReturnCode.SEEK_NEXT_USING_HINT, but while SkipScanFilter.getNextCellHint 
> should return [3,5,9], it actually returns [2,8,5,9], a very strange value, 
> and the unit test fails.
> {code} 
> @Test
> public void testNavigate() {
> RowKeySchemaBuilder builder = new RowKeySchemaBuilder(3);
> for(int i=0;i<3;i++) {
> builder.addField(
> new PDatum() {
> @Override
> public boolean isNullable() {
> return false;
> }
> @Override
> public PDataType getDataType() {
> return PInteger.INSTANCE;
> }
> @Override
> public Integer getMaxLength() {
> return PInteger.INSTANCE.getMaxLength(null);
> }
> @Override
> public Integer getScale() {
> return PInteger.INSTANCE.getScale(null);
> }
> @Override
> public SortOrder getSortOrder() {
> return SortOrder.getDefault();
> }
> }, false, SortOrder.getDefault());
> }
> 
> List<List<KeyRange>> rowKeyColumnRangesList=Arrays.asList(
> Arrays.asList(
> 
> PInteger.INSTANCE.getKeyRange(PInteger.INSTANCE.toBytes(1), true, 
> PInteger.INSTANCE.toBytes(4), true)),
> Arrays.asList(
> KeyRange.getKeyRange(PInteger.INSTANCE.toBytes(5)),
> KeyRange.getKeyRange(PInteger.INSTANCE.toBytes(7))),
> Arrays.asList(
> 
> PInteger.INSTANCE.getKeyRange(PInteger.INSTANCE.toBytes(9), true, 
> PInteger.INSTANCE.toBytes(10), true))
> );
> 
> SkipScanFilter skipScanFilter=new 
> SkipScanFilter(rowKeyColumnRangesList, builder.build());
> 
> System.out.println(skipScanFilter);
> 
> byte[] rowKey=ByteUtil.concat(
> PInteger.INSTANCE.toBytes(2), 
> PInteger.INSTANCE.toBytes(7),
> PInteger.INSTANCE.toBytes(11));
> KeyValue keyValue=KeyValue.createFirstOnRow(rowKey);
> ReturnCode returnCode=skipScanFilter.filterKeyValue(keyValue);
> assertTrue(returnCode == ReturnCode.SEEK_NEXT_USING_HINT);
> Cell nextCellHint=skipScanFilter.getNextCellHint(keyValue);
> 
> assertTrue(Bytes.toStringBinary(CellUtil.cloneRow(nextCellHint)).equals(
> "\\x80\\x00\\x00\\x03\\x80\\x00\\x00\\x05\\x80\\x00\\x00\\x09"));
> }
> {code}
> Let us see what's wrong. The first column of rowKey [2,7,11] is 2, which is in 
> the SkipScanFilter's first slot range [1-4], so position[0] is 0 and we go to 
> the second column, 7, which matches the second range [7] of the SkipScanFilter's 
> second slot [5, 7], so position[1] is 1 and we go to the third column, 11, which 
> is bigger than the third slot range [9 - 10], so position[2] is 0 and the 
> {{SkipScanFilter.ptr}} which points to the current column still stays on the 
> third column. Now we begin to backtrack to the second column: because the second 
> range [7] of the SkipScanFilter's second slot is a singleKey and there is no 
> further range, position[1] is 0 and we continue to backtrack to the first 
> column; because the first slot range [1-4] is not a singleKey, we stop 
> backtracking at the first column.
> Now the problem comes: in the following line 448 of the 
> {{SkipScanFilter.navigate}} method, the {{SkipScanFilter.setStartKey}} method is 
> invoked, which first copies the rowKey columns before {{SkipScanFilter.ptr}} to 
> {{SkipScanFilter.startKey}}, because

[jira] [Updated] (PHOENIX-3562) NPE from BaseTest#deletePriorSchemas() due to missing schema

2017-03-01 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated PHOENIX-3562:

Description: 
I was running SaltedIndexIT where I saw:
{code}
2376) testPartiallyQualifiedRVCInList[](org.apache.phoenix.end2end.QueryIT)
java.lang.NullPointerException
  at org.apache.phoenix.query.BaseTest.deletePriorSchemas(BaseTest.java:905)
  at org.apache.phoenix.query.BaseTest.deletePriorMetaData(BaseTest.java:938)
  at 
org.apache.phoenix.end2end.BaseClientManagedTimeIT.cleanUpAfterTest(BaseClientManagedTimeIT.java:59)
{code}
Here is the related code:
{code}
String schemaName = rs.getString(PhoenixDatabaseMetaData.TABLE_SCHEM);
if (schemaName.equals(PhoenixDatabaseMetaData.SYSTEM_SCHEMA_NAME)) {
{code}

I checked the code in master branch. The issue is there as well.
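
A minimal null-safe sketch, assuming the NPE comes from schemaName being null 
for tables without a schema (the loop body is illustrative):
{code}
// rs.getString(PhoenixDatabaseMetaData.TABLE_SCHEM) can return null, so put
// the constant on the left of equals() to make the comparison null-safe.
String schemaName = rs.getString(PhoenixDatabaseMetaData.TABLE_SCHEM);
if (PhoenixDatabaseMetaData.SYSTEM_SCHEMA_NAME.equals(schemaName)) {
    continue; // illustrative: skip system schemas, as the surrounding loop intends
}
{code}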


> NPE from BaseTest#deletePriorSchemas() due to missing schema
> 
>
> Key: PHOENIX-3562
> URL: https://issues.apache.org/jira/browse/PHOENIX-3562
> Project: Phoenix
>  Issue Type: Test
>Affects Versions: 4.7.0
>Reporter: Ted Yu
>Assignee: Kevin Liew
>Priority: Minor
>
> I was running SaltedIndexIT where I saw:
> {code}
> 2376) testPartiallyQualifiedRVCInList[](org.apache.phoenix.end2end.QueryIT)
> java.lang.NullPointerException
>   at org.apache.phoenix.query.BaseTest.deletePriorSchemas(BaseTest.java:905)
>   at org.apache.phoenix.query.BaseTest.deletePriorMetaData(BaseTest.java:938)
>   at 
> org.apache.phoenix.end2end.BaseClientManagedTimeIT.cleanUpAfterTest(BaseClientManagedTimeIT.java:59)
> {code}
> Here is the related code:
> {code}
> String schemaName = rs.getString(PhoenixDatabaseMetaData.TABLE_SCHEM);
> if (schemaName.equals(PhoenixDatabaseMetaData.SYSTEM_SCHEMA_NAME)) {
> {code}
> I checked the code in master branch. The issue is there as well.





[jira] [Updated] (PHOENIX-3571) Potential divide by zero exception in LongDivideExpression

2017-03-01 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated PHOENIX-3571:

Description: 
Running SaltedIndexIT, I saw the following:
{code}
===> 
testExpressionThrowsException(org.apache.phoenix.end2end.index.IndexExpressionIT)
 starts
2017-01-05 19:42:48,992 INFO  [main] client.HBaseAdmin: Created I
2017-01-05 19:42:48,996 INFO  [main] schema.MetaDataClient: Created index I at 
1483645369000
2017-01-05 19:42:49,066 WARN  [hconnection-0x5a45c218-shared--pool52-t6] 
client.AsyncProcess: #38, table=T, attempt=1/35 failed=1ops, last exception: 
org.apache.phoenix.hbase.index.builder.IndexBuildingFailureException: 
org.apache.phoenix.hbase.index.builder.IndexBuildingFailureException: Failed to 
build index for unexpected reason!
  at 
org.apache.phoenix.hbase.index.util.IndexManagementUtil.rethrowIndexingException(IndexManagementUtil.java:183)
  at org.apache.phoenix.hbase.index.Indexer.preBatchMutate(Indexer.java:204)
  at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$35.call(RegionCoprocessorHost.java:974)
  at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1660)
  at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1734)
  at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1692)
  at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preBatchMutate(RegionCoprocessorHost.java:970)
  at 
org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3218)
  at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2984)
  at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2926)
  at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:718)
  at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:680)
  at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2065)
  at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32393)
  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2141)
  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:238)
  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:218)
Caused by: java.lang.ArithmeticException: / by zero
  at 
org.apache.phoenix.expression.LongDivideExpression.evaluate(LongDivideExpression.java:50)
  at 
org.apache.phoenix.index.IndexMaintainer.buildRowKey(IndexMaintainer.java:521)
  at 
org.apache.phoenix.index.IndexMaintainer.buildUpdateMutation(IndexMaintainer.java:859)
  at 
org.apache.phoenix.index.PhoenixIndexCodec.getIndexUpserts(PhoenixIndexCodec.java:76)
  at 
org.apache.phoenix.hbase.index.covered.NonTxIndexBuilder.addCurrentStateMutationsForBatch(NonTxIndexBuilder.java:288)
  at 
org.apache.phoenix.hbase.index.covered.NonTxIndexBuilder.addUpdateForGivenTimestamp(NonTxIndexBuilder.java:256)
  at 
org.apache.phoenix.hbase.index.covered.NonTxIndexBuilder.addMutationsForBatch(NonTxIndexBuilder.java:222)
  at 
org.apache.phoenix.hbase.index.covered.NonTxIndexBuilder.batchMutationAndAddUpdates(NonTxIndexBuilder.java:109)
  at 
org.apache.phoenix.hbase.index.covered.NonTxIndexBuilder.getIndexUpdate(NonTxIndexBuilder.java:71)
  at 
org.apache.phoenix.hbase.index.builder.IndexBuildManager$1.call(IndexBuildManager.java:136)
  at 
org.apache.phoenix.hbase.index.builder.IndexBuildManager$1.call(IndexBuildManager.java:132)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  at 
com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:253)
  at 
com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:56)
  at 
org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submit(BaseTaskRunner.java:58)
  at 
org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submitUninterruptible(BaseTaskRunner.java:99)
  at 
org.apache.phoenix.hbase.index.builder.IndexBuildManager.getIndexUpdate(IndexBuildManager.java:143)
  at 
org.apache.phoenix.hbase.index.Indexer.preBatchMutateWithExceptions(Indexer.java:273)
  at org.apache.phoenix.hbase.index.Indexer.preBatchMutate(Indexer.java:201)
  ... 16 more
{code}
Better handling of divide by zero should be provided.
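
One way to provide that handling, as a hedged sketch (not the Phoenix 
implementation):
{code}
import java.sql.SQLException;

// Sketch: guard the divisor so a zero surfaces as a SQLException the caller
// can report, instead of an unchecked ArithmeticException thrown inside the
// coprocessor during index maintenance.
static long checkedDivide(long dividend, long divisor) throws SQLException {
    if (divisor == 0) {
        throw new SQLException("Divide by zero");
    }
    return dividend / divisor;
}
{code}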


[jira] [Commented] (PHOENIX-3346) Hive PhoenixStorageHandler doesn't work well with column mapping

2017-03-01 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891341#comment-15891341
 ] 

Sergey Soldatov commented on PHOENIX-3346:
--

[~samarth.j...@gmail.com] yep, just committed a missing fix for the UT.

> Hive PhoenixStorageHandler doesn't work well with column mapping
> 
>
> Key: PHOENIX-3346
> URL: https://issues.apache.org/jira/browse/PHOENIX-3346
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Blocker
>  Labels: HivePhoenix
> Fix For: 4.10.0
>
> Attachments: PHOENIX-3346-1.patch
>
>
> If column mapping is used during table creation, the Hive table becomes 
> unusable and throws an UnknownColumn exception.
> There are several issues in the current implementation:
> 1. During table creation, the mapping isn't applied to primary keys.
> 2. During SELECT query building, no mapping happens.
> 3. PhoenixRow should have a backward mapping from Phoenix column names to Hive 
> names (see the sketch below).
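
Item 3 amounts to inverting the existing mapping once, e.g. with a hypothetical 
helper like this:
{code}
import java.util.HashMap;
import java.util.Map;

// Hypothetical helper for item 3: invert the hive-name -> phoenix-name mapping
// once so PhoenixRow can translate phoenix column names back to hive names.
static Map<String, String> invert(Map<String, String> hiveToPhoenix) {
    Map<String, String> phoenixToHive = new HashMap<>();
    for (Map.Entry<String, String> entry : hiveToPhoenix.entrySet()) {
        phoenixToHive.put(entry.getValue(), entry.getKey());
    }
    return phoenixToHive;
}
{code}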





[jira] [Commented] (PHOENIX-3346) Hive PhoenixStorageHandler doesn't work well with column mapping

2017-03-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891329#comment-15891329
 ] 

Hudson commented on PHOENIX-3346:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1572 (See 
[https://builds.apache.org/job/Phoenix-master/1572/])
PHOENIX-3346 Hive PhoenixStorageHandler doesn't work well with column (ssa: rev 
7201dd5e17096209d26ca3620054fc72665cf4fe)
* (edit) 
phoenix-hive/src/main/java/org/apache/phoenix/hive/PhoenixSerializer.java
* (add) 
phoenix-hive/src/main/java/org/apache/phoenix/hive/util/ColumnMappingUtils.java
* (edit) phoenix-hive/src/it/java/org/apache/phoenix/hive/HiveTestUtil.java
* (edit) 
phoenix-hive/src/main/java/org/apache/phoenix/hive/PhoenixStorageHandler.java
* (add) phoenix-hive/src/it/java/org/apache/phoenix/hive/HiveTezIT.java
* (edit) 
phoenix-hive/src/test/java/org/apache/phoenix/hive/query/PhoenixQueryBuilderTest.java
* (edit) phoenix-hive/src/main/java/org/apache/phoenix/hive/PhoenixSerDe.java
* (edit) phoenix-hive/src/main/java/org/apache/phoenix/hive/PhoenixMetaHook.java
* (add) phoenix-hive/src/it/java/org/apache/phoenix/hive/HiveMapReduceIT.java
* (edit) 
phoenix-hive/src/main/java/org/apache/phoenix/hive/util/PhoenixConnectionUtil.java
* (edit) 
phoenix-hive/src/it/java/org/apache/phoenix/hive/HivePhoenixStoreIT.java
* (edit) 
phoenix-hive/src/main/java/org/apache/phoenix/hive/mapreduce/PhoenixInputFormat.java
* (edit) 
phoenix-hive/src/main/java/org/apache/phoenix/hive/mapreduce/PhoenixResultWritable.java
* (add) 
phoenix-hive/src/it/java/org/apache/phoenix/hive/BaseHivePhoenixStoreIT.java
* (edit) 
phoenix-hive/src/main/java/org/apache/phoenix/hive/query/PhoenixQueryBuilder.java
* (edit) phoenix-hive/pom.xml
* (edit) 
phoenix-hive/src/main/java/org/apache/phoenix/hive/mapreduce/PhoenixRecordReader.java


> Hive PhoenixStorageHandler doesn't work well with column mapping
> 
>
> Key: PHOENIX-3346
> URL: https://issues.apache.org/jira/browse/PHOENIX-3346
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Blocker
>  Labels: HivePhoenix
> Fix For: 4.10.0
>
> Attachments: PHOENIX-3346-1.patch
>
>
> If column mapping is used during table creation, the Hive table becomes 
> unusable and throws an UnknownColumn exception.
> There are several issues in the current implementation:
> 1. During table creation, the mapping isn't applied to primary keys.
> 2. During SELECT query building, no mapping happens.
> 3. PhoenixRow should have a backward mapping from Phoenix column names to Hive 
> names.





[jira] [Commented] (PHOENIX-3346) Hive PhoenixStorageHandler doesn't work well with column mapping

2017-03-01 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891299#comment-15891299
 ] 

Samarth Jain commented on PHOENIX-3346:
---

[~sergey.soldatov] - it looks like the check-in broke the tests in the Hive module.

https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/1461/console

Can you please take a look?

> Hive PhoenixStorageHandler doesn't work well with column mapping
> 
>
> Key: PHOENIX-3346
> URL: https://issues.apache.org/jira/browse/PHOENIX-3346
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Blocker
>  Labels: HivePhoenix
> Fix For: 4.10.0
>
> Attachments: PHOENIX-3346-1.patch
>
>
> If column mapping is used during table creation, the Hive table becomes 
> unusable and throws an UnknownColumn exception.
> There are several issues in the current implementation:
> 1. During table creation, the mapping isn't applied to primary keys.
> 2. During SELECT query building, no mapping happens.
> 3. PhoenixRow should have a backward mapping from Phoenix column names to Hive 
> names.





[jira] [Commented] (PHOENIX-3702) RenewLeaseIT#testLeaseDoesNotTimeout failing on 4.x-HBase-1.1 and master branches

2017-03-01 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891274#comment-15891274
 ] 

Andrew Purtell commented on PHOENIX-3702:
-

Copying over my comments from HBASE-17714

[~samarthjain] Are you sure RenewLeaseIT actually renews the lease, or allows 
for a client heartbeat to happen, before the RPC times out? The test sets a very 
short RPC timeout (2000ms) but makes no other configuration changes.

The release notes on HBASE-13090 say

{quote}
To ensure that timeout checks do not occur too often (which would hurt the 
performance of scans), the configuration 
"hbase.cells.scanned.per.heartbeat.check" has been introduced. This 
configuration controls how often System.currentTimeMillis() is called to update 
the progress towards the time limit. Currently, the default value of this 
configuration value is 1.
{quote}
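
For experimentation, the check interval can be lowered so the time-limit check 
fires more often, at some scan-performance cost (a sketch; the value 100 is 
illustrative):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Sketch: lower the server-side heartbeat check interval so progress toward
// the scan time limit is checked more frequently.
static Configuration confWithFrequentHeartbeatChecks() {
    Configuration conf = HBaseConfiguration.create();
    conf.setLong("hbase.cells.scanned.per.heartbeat.check", 100L);
    return conf;
}
{code}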


> RenewLeaseIT#testLeaseDoesNotTimeout failing on 4.x-HBase-1.1 and master 
> branches
> -
>
> Key: PHOENIX-3702
> URL: https://issues.apache.org/jira/browse/PHOENIX-3702
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
>Priority: Blocker
> Fix For: 4.10.0
>
>
> Failure stacktrace:
> {code}
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.phoenix.exception.PhoenixIOException: Failed after retry of 
> OutOfOrderScannerNextException: was there a rpc timeout?
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:113)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:852)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:796)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>   at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>   at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>   at 
> org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:778)
>   at 
> org.apache.phoenix.end2end.RenewLeaseIT.testLeaseDoesNotTimeout(RenewLeaseIT.java:68)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
>   at 
> org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
>   at 
> 

[jira] [Resolved] (PHOENIX-3585) MutableIndexIT testSplitDuringIndexScan and testIndexHalfStoreFileReader fail for transactional tables and local indexes

2017-03-01 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva resolved PHOENIX-3585.
-
Resolution: Fixed

> MutableIndexIT testSplitDuringIndexScan and testIndexHalfStoreFileReader fail 
> for transactional tables and local indexes
> 
>
> Key: PHOENIX-3585
> URL: https://issues.apache.org/jira/browse/PHOENIX-3585
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
>Priority: Blocker
> Fix For: 4.10.0
>
> Attachments: diff.patch
>
>
> the tests fail if we use HDFSTransactionStateStorage instead of  
> InMemoryTransactionStateStorage when we create the TransactionManager in 
> BaseTest



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3248) Enable HBase server-side scan metrics to be returned to client and surfaced through metrics

2017-03-01 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15891159#comment-15891159
 ] 

Samarth Jain commented on PHOENIX-3248:
---

Patch looks great, [~karanmehta93]! Nice job!

Please file an HBase JIRA to formalize the scan metric names. We are currently 
relying on hard-coded strings in HBase, which isn't ideal. I would like to get 
the HBase JIRA in first and then tweak this patch to use those enums.

> Enable HBase server-side scan metrics to be returned to client and surfaced 
> through metrics
> ---
>
> Key: PHOENIX-3248
> URL: https://issues.apache.org/jira/browse/PHOENIX-3248
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Karan Mehta
> Attachments: PHOENIX-3248.patch
>
>
> We collect many client-side metrics[1] for Phoenix statements. We should 
> enable returning the more detailed server-side HBase scan metrics (through 
> Scan.setScanMetricsEnabled(true) and Scan.getScanMetrics()), and then 
> incorporate these into our client-side metrics.
> [1] https://phoenix.apache.org/metrics.html
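
The two Scan calls named in the description, in a minimal sketch (HBase 1.x 
client API; the table wiring is illustrative):
{code}
import java.io.IOException;

import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.client.metrics.ScanMetrics;

// Minimal sketch: enable server-side scan metrics, drain the scanner, then
// read the populated metrics back from the Scan.
static ScanMetrics scanWithMetrics(Table table) throws IOException {
    Scan scan = new Scan();
    scan.setScanMetricsEnabled(true);
    try (ResultScanner scanner = table.getScanner(scan)) {
        for (Result ignored : scanner) {
            // consume rows; metrics accumulate as the scan progresses
        }
    }
    return scan.getScanMetrics(); // e.g. countOfRowsScanned, countOfRPCcalls
}
{code}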





[jira] [Commented] (PHOENIX-3702) RenewLeaseIT#testLeaseDoesNotTimeout failing on 4.x-HBase-1.1 and master branches

2017-03-01 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890999#comment-15890999
 ] 

Andrew Purtell commented on PHOENIX-3702:
-

bq. How about mvn -Dtest=RenewLeaseIT test

That's better. Thought I did that. Whatever. ¯\_(ツ)_/¯ Thanks

> RenewLeaseIT#testLeaseDoesNotTimeout failing on 4.x-HBase-1.1 and master 
> branches
> -
>
> Key: PHOENIX-3702
> URL: https://issues.apache.org/jira/browse/PHOENIX-3702
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
>Priority: Blocker
> Fix For: 4.10.0
>
>
> Failure stacktrace:
> {code}
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.phoenix.exception.PhoenixIOException: Failed after retry of 
> OutOfOrderScannerNextException: was there a rpc timeout?
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:113)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:852)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:796)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>   at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>   at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>   at 
> org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:778)
>   at 
> org.apache.phoenix.end2end.RenewLeaseIT.testLeaseDoesNotTimeout(RenewLeaseIT.java:68)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
>   at 
> org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.phoenix.exception.PhoenixIOException: Failed after retry of 
> OutOfOrderScannerNextException: was there a rpc timeout?
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:202)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:847)
>   ... 35 more
> Caused by: org.apache.phoenix.exception.PhoenixIOException: Failed after 
> retry of 

[jira] [Commented] (PHOENIX-3702) RenewLeaseIT#testLeaseDoesNotTimeout failing on 4.x-HBase-1.1 and master branches

2017-03-01 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890993#comment-15890993
 ] 

Samarth Jain commented on PHOENIX-3702:
---

How about {{mvn -Dtest=RenewLeaseIT test}}?


> RenewLeaseIT#testLeaseDoesNotTimeout failing on 4.x-HBase-1.1 and master 
> branches
> -
>
> Key: PHOENIX-3702
> URL: https://issues.apache.org/jira/browse/PHOENIX-3702
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
>Priority: Blocker
> Fix For: 4.10.0
>
>
> Failure stacktrace:
> {code}
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.phoenix.exception.PhoenixIOException: Failed after retry of 
> OutOfOrderScannerNextException: was there a rpc timeout?
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:113)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:852)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:796)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>   at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>   at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>   at 
> org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:778)
>   at 
> org.apache.phoenix.end2end.RenewLeaseIT.testLeaseDoesNotTimeout(RenewLeaseIT.java:68)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
>   at 
> org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.phoenix.exception.PhoenixIOException: Failed after retry of 
> OutOfOrderScannerNextException: was there a rpc timeout?
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:202)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:847)
>   ... 35 more
> Caused by: org.apache.phoenix.exception.PhoenixIOException: Failed after 
> retry of OutOfOrderScannerNextException: was there a rpc timeout?
>   at 
> 

[jira] [Commented] (PHOENIX-3702) RenewLeaseIT#testLeaseDoesNotTimeout failing on 4.x-HBase-1.1 and master branches

2017-03-01 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890986#comment-15890986
 ] 

Andrew Purtell commented on PHOENIX-3702:
-

[~samarthjain] Have you ever tried to run just one IT test? I am trying

{code}
mvn clean install -DskipTests -Dhbase.version=$version && 
mvn verify -Dit.test=RenewLeaseIT -Dhbase.version=$version
{code}

Also tried {{-Dtest=RenewLeaseIT}}

Neither works as expected; all unit tests are run. You guys have set up 
something fairly extensive in your POMs for running failsafe targets, so 
perhaps something in there is the issue. Do you have an invocation that works 
for you?
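
One workaround that sometimes bypasses the lifecycle bindings is to invoke the 
failsafe goals directly; this is untested against the Phoenix POMs, so treat 
it as a sketch:

{code}
# Build once without tests, then run only the failsafe goals for one IT class.
mvn clean install -DskipTests -Dhbase.version=$version
mvn failsafe:integration-test failsafe:verify -Dit.test=RenewLeaseIT -Dhbase.version=$version
{code}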

> RenewLeaseIT#testLeaseDoesNotTimeout failing on 4.x-HBase-1.1 and master 
> branches
> -
>
> Key: PHOENIX-3702
> URL: https://issues.apache.org/jira/browse/PHOENIX-3702
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
>Priority: Blocker
> Fix For: 4.10.0
>
>
> Failure stacktrace:
> {code}
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.phoenix.exception.PhoenixIOException: Failed after retry of 
> OutOfOrderScannerNextException: was there a rpc timeout?
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:113)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:852)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:796)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>   at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>   at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>   at 
> org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:778)
>   at 
> org.apache.phoenix.end2end.RenewLeaseIT.testLeaseDoesNotTimeout(RenewLeaseIT.java:68)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
>   at 
> org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.phoenix.exception.PhoenixIOException: Failed after retry of 
> OutOfOrderScannerNextException: was there a rpc 

[jira] [Commented] (PHOENIX-3702) RenewLeaseIT#testLeaseDoesNotTimeout failing on 4.x-HBase-1.1 and master branches

2017-03-01 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890954#comment-15890954
 ] 

Andrew Purtell commented on PHOENIX-3702:
-

I'll bisect over on HBASE-17714 to find the commit that broke this.
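
Roughly along these lines; the known-good tag and the test script are 
illustrative assumptions:

{code}
cd hbase
git bisect start
git bisect bad                # current branch-1.1 HEAD breaks RenewLeaseIT
git bisect good rel/1.1.8     # assumed known-good release tag
# test-renewlease.sh (hypothetical) rebuilds HBase locally, runs the Phoenix
# IT, and exits nonzero on failure so bisect can classify each commit.
git bisect run ../test-renewlease.sh
{code}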

> RenewLeaseIT#testLeaseDoesNotTimeout failing on 4.x-HBase-1.1 and master 
> branches
> -
>
> Key: PHOENIX-3702
> URL: https://issues.apache.org/jira/browse/PHOENIX-3702
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
>Priority: Blocker
> Fix For: 4.10.0
>
>
> Failure stacktrace:
> {code}
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.phoenix.exception.PhoenixIOException: Failed after retry of 
> OutOfOrderScannerNextException: was there a rpc timeout?
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:113)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:852)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:796)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>   at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>   at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>   at 
> org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:778)
>   at 
> org.apache.phoenix.end2end.RenewLeaseIT.testLeaseDoesNotTimeout(RenewLeaseIT.java:68)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
>   at 
> org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.phoenix.exception.PhoenixIOException: Failed after retry of 
> OutOfOrderScannerNextException: was there a rpc timeout?
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:202)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:847)
>   ... 35 more
> Caused by: org.apache.phoenix.exception.PhoenixIOException: Failed after 
> retry of OutOfOrderScannerNextException: was there a 

[jira] [Updated] (PHOENIX-3585) MutableIndexIT testSplitDuringIndexScan and testIndexHalfStoreFileReader fail for transactional tables and local indexes

2017-03-01 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3585:
--
Priority: Blocker  (was: Major)

> MutableIndexIT testSplitDuringIndexScan and testIndexHalfStoreFileReader fail 
> for transactional tables and local indexes
> 
>
> Key: PHOENIX-3585
> URL: https://issues.apache.org/jira/browse/PHOENIX-3585
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
>Priority: Blocker
> Fix For: 4.10.0
>
> Attachments: diff.patch
>
>
> The tests fail if we use HDFSTransactionStateStorage instead of 
> InMemoryTransactionStateStorage when we create the TransactionManager in 
> BaseTest.





[jira] [Updated] (PHOENIX-3585) MutableIndexIT testSplitDuringIndexScan and testIndexHalfStoreFileReader fail for transactional tables and local indexes

2017-03-01 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3585:
--
Fix Version/s: 4.10.0

> MutableIndexIT testSplitDuringIndexScan and testIndexHalfStoreFileReader fail 
> for transactional tables and local indexes
> 
>
> Key: PHOENIX-3585
> URL: https://issues.apache.org/jira/browse/PHOENIX-3585
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
> Fix For: 4.10.0
>
> Attachments: diff.patch
>
>
> The tests fail if we use HDFSTransactionStateStorage instead of 
> InMemoryTransactionStateStorage when we create the TransactionManager in 
> BaseTest.





[jira] [Updated] (PHOENIX-3680) Do not issue delete markers when dropping a column from an immutable encoded table

2017-03-01 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3680:
--
Priority: Blocker  (was: Major)

> Do not issue delete markers when dropping a column from an immutable encoded 
> table
> --
>
> Key: PHOENIX-3680
> URL: https://issues.apache.org/jira/browse/PHOENIX-3680
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
>Priority: Blocker
> Fix For: 4.10.0
>
>






[jira] [Updated] (PHOENIX-3680) Do not issue delete markers when dropping a column from an immutable encoded table

2017-03-01 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3680:
--
Fix Version/s: 4.10.0

> Do not issue delete markers when dropping a column from an immutable encoded 
> table
> --
>
> Key: PHOENIX-3680
> URL: https://issues.apache.org/jira/browse/PHOENIX-3680
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
> Fix For: 4.10.0
>
>






[jira] [Updated] (PHOENIX-3685) Extra DeleteFamily marker in non tx index table when setting covered column to null

2017-03-01 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3685:
--
Fix Version/s: 4.10.0

> Extra DeleteFamily marker in non tx index table when setting covered column 
> to null
> ---
>
> Key: PHOENIX-3685
> URL: https://issues.apache.org/jira/browse/PHOENIX-3685
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Thomas D'Silva
> Fix For: 4.10.0
>
> Attachments: PHOENIX-3685-test.patch
>
>
> Based on some testing (see patch), I noticed a mysterious DeleteFamily marker 
> when a covered column is set to null. This could potentially delete an actual 
> row with that row key, so it's bad.
> Here's a raw scan dump taken after the MutableIndexIT.testCoveredColumns() 
> test:
> {code}
>  dumping IDX_T02;hconnection-0x211e75ea **
> \x00a/0:/1487356752097/DeleteFamily/vlen=0/seqid=0 value = 
> x\x00a/0:0:V2/1487356752231/Put/vlen=1/seqid=0 value = 4
> x\x00a/0:0:V2/1487356752225/Put/vlen=1/seqid=0 value = 4
> x\x00a/0:0:V2/1487356752202/Put/vlen=1/seqid=0 value = 3
> x\x00a/0:0:V2/1487356752149/DeleteColumn/vlen=0/seqid=0 value = 
> x\x00a/0:0:V2/1487356752097/Put/vlen=1/seqid=0 value = 1
> x\x00a/0:_0/1487356752231/Put/vlen=2/seqid=0 value = _0
> x\x00a/0:_0/1487356752225/Put/vlen=2/seqid=0 value = _0
> x\x00a/0:_0/1487356752202/Put/vlen=2/seqid=0 value = _0
> x\x00a/0:_0/1487356752149/Put/vlen=2/seqid=0 value = _0
> x\x00a/0:_0/1487356752097/Put/vlen=2/seqid=0 value = _0
> ---
> {code}
> That first DeleteFamily marker shouldn't be there. This occurs for both 
> global and local indexes, but not for transactional tables. A further 
> optimization would be not to issue the first Put since the value behind it is 
> the same.
> On the plus side, we're not issuing DeleteFamily markers when only the 
> covered column is being set, which is good.





[jira] [Updated] (PHOENIX-3685) Extra DeleteFamily marker in non tx index table when setting covered column to null

2017-03-01 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3685:
--
Priority: Blocker  (was: Major)

> Extra DeleteFamily marker in non tx index table when setting covered column 
> to null
> ---
>
> Key: PHOENIX-3685
> URL: https://issues.apache.org/jira/browse/PHOENIX-3685
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Thomas D'Silva
>Priority: Blocker
> Fix For: 4.10.0
>
> Attachments: PHOENIX-3685-test.patch
>
>
> Based on some testing (see patch), I noticed a mysterious DeleteFamily marker 
> when a covered column is set to null. This could potentially delete an actual 
> row with that row key, so it's bad.
> Here's a raw scan dump taken after the MutableIndexIT.testCoveredColumns() 
> test:
> {code}
>  dumping IDX_T02;hconnection-0x211e75ea **
> \x00a/0:/1487356752097/DeleteFamily/vlen=0/seqid=0 value = 
> x\x00a/0:0:V2/1487356752231/Put/vlen=1/seqid=0 value = 4
> x\x00a/0:0:V2/1487356752225/Put/vlen=1/seqid=0 value = 4
> x\x00a/0:0:V2/1487356752202/Put/vlen=1/seqid=0 value = 3
> x\x00a/0:0:V2/1487356752149/DeleteColumn/vlen=0/seqid=0 value = 
> x\x00a/0:0:V2/1487356752097/Put/vlen=1/seqid=0 value = 1
> x\x00a/0:_0/1487356752231/Put/vlen=2/seqid=0 value = _0
> x\x00a/0:_0/1487356752225/Put/vlen=2/seqid=0 value = _0
> x\x00a/0:_0/1487356752202/Put/vlen=2/seqid=0 value = _0
> x\x00a/0:_0/1487356752149/Put/vlen=2/seqid=0 value = _0
> x\x00a/0:_0/1487356752097/Put/vlen=2/seqid=0 value = _0
> ---
> {code}
> That first DeleteFamily marker shouldn't be there. This occurs for both 
> global and local indexes, but not for transactional tables. A further 
> optimization would be not to issue the first Put since the value behind it is 
> the same.
> On the plus side, we're not issuing DeleteFamily markers when only the 
> covered column is being set, which is good.





[jira] [Updated] (PHOENIX-3702) RenewLeaseIT#testLeaseDoesNotTimeout failing on 4.x-HBase-1.1 and master branches

2017-03-01 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3702:
--
Fix Version/s: 4.10.0

> RenewLeaseIT#testLeaseDoesNotTimeout failing on 4.x-HBase-1.1 and master 
> branches
> -
>
> Key: PHOENIX-3702
> URL: https://issues.apache.org/jira/browse/PHOENIX-3702
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Priority: Blocker
> Fix For: 4.10.0
>
>
> Failure stacktrace:
> {code}
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.phoenix.exception.PhoenixIOException: Failed after retry of 
> OutOfOrderScannerNextException: was there a rpc timeout?
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:113)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:852)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:796)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>   at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>   at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>   at 
> org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:778)
>   at 
> org.apache.phoenix.end2end.RenewLeaseIT.testLeaseDoesNotTimeout(RenewLeaseIT.java:68)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
>   at 
> org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.phoenix.exception.PhoenixIOException: Failed after retry of 
> OutOfOrderScannerNextException: was there a rpc timeout?
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:202)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:847)
>   ... 35 more
> Caused by: org.apache.phoenix.exception.PhoenixIOException: Failed after 
> retry of OutOfOrderScannerNextException: was there a rpc timeout?
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:113)
>   at 
> 

[jira] [Updated] (PHOENIX-3702) RenewLeaseIT#testLeaseDoesNotTimeout failing on 4.x-HBase-1.1 and master branches

2017-03-01 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3702:
--
Priority: Blocker  (was: Major)

> RenewLeaseIT#testLeaseDoesNotTimeout failing on 4.x-HBase-1.1 and master 
> branches
> -
>
> Key: PHOENIX-3702
> URL: https://issues.apache.org/jira/browse/PHOENIX-3702
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Priority: Blocker
> Fix For: 4.10.0
>
>
> Failure stacktrace:
> {code}
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.phoenix.exception.PhoenixIOException: Failed after retry of 
> OutOfOrderScannerNextException: was there a rpc timeout?
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:113)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:852)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:796)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>   at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>   at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>   at 
> org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:778)
>   at 
> org.apache.phoenix.end2end.RenewLeaseIT.testLeaseDoesNotTimeout(RenewLeaseIT.java:68)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
>   at 
> org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.phoenix.exception.PhoenixIOException: Failed after retry of 
> OutOfOrderScannerNextException: was there a rpc timeout?
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:202)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:847)
>   ... 35 more
> Caused by: org.apache.phoenix.exception.PhoenixIOException: Failed after 
> retry of OutOfOrderScannerNextException: was there a rpc timeout?
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:113)
>   

[jira] [Assigned] (PHOENIX-3702) RenewLeaseIT#testLeaseDoesNotTimeout failing on 4.x-HBase-1.1 and master branches

2017-03-01 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reassigned PHOENIX-3702:
-

Assignee: Samarth Jain

> RenewLeaseIT#testLeaseDoesNotTimeout failing on 4.x-HBase-1.1 and master 
> branches
> -
>
> Key: PHOENIX-3702
> URL: https://issues.apache.org/jira/browse/PHOENIX-3702
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
>Priority: Blocker
> Fix For: 4.10.0
>
>
> Failure stacktrace:
> {code}
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.phoenix.exception.PhoenixIOException: Failed after retry of 
> OutOfOrderScannerNextException: was there a rpc timeout?
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:113)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:852)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:796)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>   at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>   at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>   at 
> org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:778)
>   at 
> org.apache.phoenix.end2end.RenewLeaseIT.testLeaseDoesNotTimeout(RenewLeaseIT.java:68)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
>   at 
> org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.phoenix.exception.PhoenixIOException: Failed after retry of 
> OutOfOrderScannerNextException: was there a rpc timeout?
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:202)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:847)
>   ... 35 more
> Caused by: org.apache.phoenix.exception.PhoenixIOException: Failed after 
> retry of OutOfOrderScannerNextException: was there a rpc timeout?
>   at 
> 

[jira] [Updated] (PHOENIX-3346) Hive PhoenixStorageHandler doesn't work well with column mapping

2017-03-01 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3346:
--
Priority: Blocker  (was: Major)

> Hive PhoenixStorageHandler doesn't work well with column mapping
> 
>
> Key: PHOENIX-3346
> URL: https://issues.apache.org/jira/browse/PHOENIX-3346
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Blocker
>  Labels: HivePhoenix
> Fix For: 4.10.0
>
> Attachments: PHOENIX-3346-1.patch
>
>
> If column mapping is used during table creation, the Hive table becomes 
> unusable and throws an UnknownColumn exception.
> There are several issues in the current implementation:
> 1. During table creation, the mapping isn't applied to primary keys
> 2. During select query building, no mapping happens
> 3. PhoenixRow should have a backward mapping from Phoenix column names to 
> Hive names.





[jira] [Updated] (PHOENIX-3346) Hive PhoenixStorageHandler doesn't work well with column mapping

2017-03-01 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3346:
--
Fix Version/s: 4.10.0

> Hive PhoenixStorageHandler doesn't work well with column mapping
> 
>
> Key: PHOENIX-3346
> URL: https://issues.apache.org/jira/browse/PHOENIX-3346
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Blocker
>  Labels: HivePhoenix
> Fix For: 4.10.0
>
> Attachments: PHOENIX-3346-1.patch
>
>
> If column mapping is used during table creation, the Hive table becomes 
> unusable and throws an UnknownColumn exception.
> There are several issues in the current implementation:
> 1. During table creation, the mapping isn't applied to primary keys
> 2. During select query building, no mapping happens
> 3. PhoenixRow should have a backward mapping from Phoenix column names to 
> Hive names.





[jira] [Comment Edited] (PHOENIX-3705) SkipScanFilter may repeatedly copy rowKey Columns to startKey

2017-03-01 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890854#comment-15890854
 ] 

James Taylor edited comment on PHOENIX-3705 at 3/1/17 7:23 PM:
---

Very nice, [~comnetwork]. It seems like this bug could affect the final query 
result, perhaps skipping rows it shouldn't depending on the data, no? Anytime 
the seek next hint is wrong, I think this could be the case.

Small question on whether perhaps one more change is required. I noticed you 
used {{ScanUtil.getRowKeyPosition(slotSpan, j + 1)}} below in the call to 
reposition, but further down (in existing code), we use 
{{ScanUtil.getRowKeyPosition(slotSpan, j)+1}}. Should the latter be changed as 
you've coded? It would be good to have a test case that uses a Row Value 
Constructor (i.e., one with a slot span) to confirm:
{code}
schema.reposition(
        ptr,
        ScanUtil.getRowKeyPosition(slotSpan, i),
        ScanUtil.getRowKeyPosition(slotSpan, j + 1),
        minOffset,
        maxOffset,
        slotSpan[j + 1]);
int currentLength = setStartKey(ptr, minOffset, j+1, nSlots, false);
// From here on, we use startKey as our buffer (resetting minOffset and maxOffset)
// We've copied the part of the current key above that we need into startKey
// Reinitialize the iterator to be positioned at previous slot position
minOffset = 0;
maxOffset = startKeyLength;
schema.iterator(startKey, minOffset, maxOffset, ptr,
        ScanUtil.getRowKeyPosition(slotSpan, j)+1);
{code}

One more small request: would you mind adding a code comment before your 
reposition call explaining why that's necessary?


was (Author: jamestaylor):
Very nice, [~comnetwork]. It seems like this bug could affect the final query 
result, perhaps skipping rows it shouldn't depending on the data, no? Anytime 
the seek next hint is wrong, I think this could be the case.

Small question on whether perhaps one more change is required. I noticed you 
used {{ScanUtil.getRowKeyPosition(slotSpan, j + 1)}} below in the call to 
reposition, but further down (in existing code), we use 
{{ScanUtil.getRowKeyPosition(slotSpan, j)+1}}. Should the latter be changed as 
you've coded? It would be good to have a test case that uses a Row Value 
Constructor (i.e., one with a slot span) to confirm:
{code}
schema.reposition(
        ptr,
        ScanUtil.getRowKeyPosition(slotSpan, i),
        ScanUtil.getRowKeyPosition(slotSpan, j + 1),
        minOffset,
        maxOffset,
        slotSpan[j + 1]);
int currentLength = setStartKey(ptr, minOffset, j+1, nSlots, false);
// From here on, we use startKey as our buffer (resetting minOffset and maxOffset)
// We've copied the part of the current key above that we need into startKey
// Reinitialize the iterator to be positioned at previous slot position
minOffset = 0;
maxOffset = startKeyLength;
schema.iterator(startKey, minOffset, maxOffset, ptr,
        ScanUtil.getRowKeyPosition(slotSpan, j)+1);
{code}

> SkipScanFilter may repeatedly copy rowKey Columns to startKey
> -
>
> Key: PHOENIX-3705
> URL: https://issues.apache.org/jira/browse/PHOENIX-3705
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.9.0
>Reporter: chenglei
>Priority: Blocker
> Fix For: 4.10.0
>
> Attachments: PHOENIX-3705_v1.patch
>
>
> See the following simple unit test first. The rowKey is composed of three 
> PInteger columns, and the slots of the SkipScanFilter are:
> [ [[1 - 4]], [5, 7], [[9 - 10]] ]
> When SkipScanFilter.filterKeyValue is invoked on a KeyValue whose rowKey is 
> [2,7,11], SkipScanFilter.filterKeyValue correctly returns 
> ReturnCode.SEEK_NEXT_USING_HINT, and SkipScanFilter.getNextCellHint should 
> return [3,5,9], but unfortunately it actually returns [2,8,5,9], a very 
> strange value, so the unit test fails.
> {code} 
> @Test
> public void testNavigate() {
> RowKeySchemaBuilder builder = new RowKeySchemaBuilder(3);
> for(int i=0;i<3;i++) {
> builder.addField(
> new PDatum() {
> @Override
> public boolean isNullable() {
> return false;
> }
> @Override
> 

[jira] [Updated] (PHOENIX-3705) SkipScanFilter may repeatedly copy rowKey Columns to startKey

2017-03-01 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3705:
--
Fix Version/s: 4.10.0

> SkipScanFilter may repeatedly copy rowKey Columns to startKey
> -
>
> Key: PHOENIX-3705
> URL: https://issues.apache.org/jira/browse/PHOENIX-3705
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.9.0
>Reporter: chenglei
> Fix For: 4.10.0
>
> Attachments: PHOENIX-3705_v1.patch
>
>
> See the following simple unit test first. The rowKey is composed of three 
> PInteger columns, and the slots of the SkipScanFilter are:
> [ [[1 - 4]], [5, 7], [[9 - 10]] ]
> When SkipScanFilter.filterKeyValue is invoked on a KeyValue whose rowKey is 
> [2,7,11], SkipScanFilter.filterKeyValue correctly returns 
> ReturnCode.SEEK_NEXT_USING_HINT, and SkipScanFilter.getNextCellHint should 
> return [3,5,9], but unfortunately it actually returns [2,8,5,9], a very 
> strange value, so the unit test fails.
> {code} 
> @Test
> public void testNavigate() {
> RowKeySchemaBuilder builder = new RowKeySchemaBuilder(3);
> for(int i=0;i<3;i++) {
> builder.addField(
> new PDatum() {
> @Override
> public boolean isNullable() {
> return false;
> }
> @Override
> public PDataType getDataType() {
> return PInteger.INSTANCE;
> }
> @Override
> public Integer getMaxLength() {
> return PInteger.INSTANCE.getMaxLength(null);
> }
> @Override
> public Integer getScale() {
> return PInteger.INSTANCE.getScale(null);
> }
> @Override
> public SortOrder getSortOrder() {
> return SortOrder.getDefault();
> }
> }, false, SortOrder.getDefault());
> }
> 
> List<List<KeyRange>> rowKeyColumnRangesList = Arrays.asList(  
> Arrays.asList(
> 
> PInteger.INSTANCE.getKeyRange(PInteger.INSTANCE.toBytes(1), true, 
> PInteger.INSTANCE.toBytes(4), true)),
> Arrays.asList(
> KeyRange.getKeyRange(PInteger.INSTANCE.toBytes(5)),
> KeyRange.getKeyRange(PInteger.INSTANCE.toBytes(7))),
> Arrays.asList(
> 
> PInteger.INSTANCE.getKeyRange(PInteger.INSTANCE.toBytes(9), true, 
> PInteger.INSTANCE.toBytes(10), true))
> );
> 
> SkipScanFilter skipScanFilter=new 
> SkipScanFilter(rowKeyColumnRangesList, builder.build());
> 
> System.out.println(skipScanFilter);
> 
> byte[] rowKey=ByteUtil.concat(
> PInteger.INSTANCE.toBytes(2), 
> PInteger.INSTANCE.toBytes(7),
> PInteger.INSTANCE.toBytes(11));
> KeyValue keyValue=KeyValue.createFirstOnRow(rowKey);
> ReturnCode returnCode=skipScanFilter.filterKeyValue(keyValue);
> assertTrue(returnCode == ReturnCode.SEEK_NEXT_USING_HINT);
> Cell nextCellHint=skipScanFilter.getNextCellHint(keyValue);
> 
> assertTrue(Bytes.toStringBinary(CellUtil.cloneRow(nextCellHint)).equals(
> "\\x80\\x00\\x00\\x03\\x80\\x00\\x00\\x05\\x80\\x00\\x00\\x09"));
> }
> {code}
> Let us see what's wrong. The first column of rowKey [2,7,11] is 2, which is 
> in the SkipScanFilter's first slot range [1-4], so position[0] is 0 and we 
> go to the second column, 7, which matches the second range [7] of the 
> SkipScanFilter's second slot [5, 7], so position[1] is 1 and we go to the 
> third column, 11, which is bigger than the third slot range [9 - 10], so 
> position[2] is 0 and {{SkipScanFilter.ptr}}, which points to the current 
> column, still stays on the third column. Now we begin to backtrack to the 
> second column: because the second range [7] of the SkipScanFilter's second 
> slot is a singleKey and there is no more range, position[1] is 0 and we 
> continue to backtrack to the first column. Because the first slot range 
> [1-4] is not a singleKey, we stop backtracking at the first column.
> Now the problem comes: at line 448 of the {{SkipScanFilter.navigate}} 
> method, {{SkipScanFilter.setStartKey}} is invoked, which first copies the 
> rowKey columns before {{SkipScanFilter.ptr}} to {{SkipScanFilter.startKey}}; 
> because {{SkipScanFilter.ptr}} still points to the third column, it copies 
> the first and second columns to 

[jira] [Updated] (PHOENIX-3705) SkipScanFilter may repeatedly copy rowKey Columns to startKey

2017-03-01 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3705:
--
Priority: Blocker  (was: Major)

> SkipScanFilter may repeatedly copy rowKey Columns to startKey
> -
>
> Key: PHOENIX-3705
> URL: https://issues.apache.org/jira/browse/PHOENIX-3705
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.9.0
>Reporter: chenglei
>Priority: Blocker
> Fix For: 4.10.0
>
> Attachments: PHOENIX-3705_v1.patch
>
>
> See the following simple unit test first. The rowKey is composed of three 
> PInteger columns, and the slots of the SkipScanFilter are:
> [ [[1 - 4]], [5, 7], [[9 - 10]] ]
> When SkipScanFilter.filterKeyValue is invoked on a KeyValue whose rowKey is 
> [2,7,11], SkipScanFilter.filterKeyValue correctly returns 
> ReturnCode.SEEK_NEXT_USING_HINT, and SkipScanFilter.getNextCellHint should 
> return [3,5,9], but unfortunately it actually returns [2,8,5,9], a very 
> strange value, so the unit test fails.
> {code} 
> @Test
> public void testNavigate() {
> RowKeySchemaBuilder builder = new RowKeySchemaBuilder(3);
> for(int i=0;i<3;i++) {
> builder.addField(
> new PDatum() {
> @Override
> public boolean isNullable() {
> return false;
> }
> @Override
> public PDataType getDataType() {
> return PInteger.INSTANCE;
> }
> @Override
> public Integer getMaxLength() {
> return PInteger.INSTANCE.getMaxLength(null);
> }
> @Override
> public Integer getScale() {
> return PInteger.INSTANCE.getScale(null);
> }
> @Override
> public SortOrder getSortOrder() {
> return SortOrder.getDefault();
> }
> }, false, SortOrder.getDefault());
> }
> 
> List<List<KeyRange>> rowKeyColumnRangesList = Arrays.asList(  
> Arrays.asList(
> 
> PInteger.INSTANCE.getKeyRange(PInteger.INSTANCE.toBytes(1), true, 
> PInteger.INSTANCE.toBytes(4), true)),
> Arrays.asList(
> KeyRange.getKeyRange(PInteger.INSTANCE.toBytes(5)),
> KeyRange.getKeyRange(PInteger.INSTANCE.toBytes(7))),
> Arrays.asList(
> 
> PInteger.INSTANCE.getKeyRange(PInteger.INSTANCE.toBytes(9), true, 
> PInteger.INSTANCE.toBytes(10), true))
> );
> 
> SkipScanFilter skipScanFilter=new 
> SkipScanFilter(rowKeyColumnRangesList, builder.build());
> 
> System.out.println(skipScanFilter);
> 
> byte[] rowKey=ByteUtil.concat(
> PInteger.INSTANCE.toBytes(2), 
> PInteger.INSTANCE.toBytes(7),
> PInteger.INSTANCE.toBytes(11));
> KeyValue keyValue=KeyValue.createFirstOnRow(rowKey);
> ReturnCode returnCode=skipScanFilter.filterKeyValue(keyValue);
> assertTrue(returnCode == ReturnCode.SEEK_NEXT_USING_HINT);
> Cell nextCellHint=skipScanFilter.getNextCellHint(keyValue);
> 
> assertTrue(Bytes.toStringBinary(CellUtil.cloneRow(nextCellHint)).equals(
> "\\x80\\x00\\x00\\x03\\x80\\x00\\x00\\x05\\x80\\x00\\x00\\x09"));
> }
> {code}
> Let us see what's wrong. The first column of rowKey [2,7,11] is 2, which is 
> in the SkipScanFilter's first slot range [1-4], so position[0] is 0 and we 
> go to the second column, 7, which matches the second range [7] of the 
> SkipScanFilter's second slot [5, 7], so position[1] is 1 and we go to the 
> third column, 11, which is bigger than the third slot range [9 - 10], so 
> position[2] is 0 and {{SkipScanFilter.ptr}}, which points to the current 
> column, still stays on the third column. Now we begin to backtrack to the 
> second column: because the second range [7] of the SkipScanFilter's second 
> slot is a singleKey and there is no more range, position[1] is 0 and we 
> continue to backtrack to the first column. Because the first slot range 
> [1-4] is not a singleKey, we stop backtracking at the first column.
> Now the problem comes: at line 448 of the {{SkipScanFilter.navigate}} 
> method, {{SkipScanFilter.setStartKey}} is invoked, which first copies the 
> rowKey columns before {{SkipScanFilter.ptr}} to {{SkipScanFilter.startKey}}; 
> because {{SkipScanFilter.ptr}} still points to the third column, it copies 
> the first and 

[jira] [Commented] (PHOENIX-3705) SkipScanFilter may repeatedly copy rowKey Columns to startKey

2017-03-01 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890854#comment-15890854
 ] 

James Taylor commented on PHOENIX-3705:
---

Very nice, [~comnetwork]. It seems like this bug could affect the final query 
result, perhaps skipping rows it shouldn't depending on the data, no? Anytime 
the seek next hint is wrong, I think this could be the case.

Small question on whether perhaps one more change is required. I noticed you 
used {{ScanUtil.getRowKeyPosition(slotSpan, j + 1)}} below in the call to 
reposition, but further down (in existing code), we use 
{{ScanUtil.getRowKeyPosition(slotSpan, j)+1}}. Should the latter be changed as 
you've coded? It would be good to have a test case that uses a Row Value 
Constructor (i.e., one with a slot span) to confirm:
{code}
schema.reposition(
        ptr,
        ScanUtil.getRowKeyPosition(slotSpan, i),
        ScanUtil.getRowKeyPosition(slotSpan, j + 1),
        minOffset,
        maxOffset,
        slotSpan[j + 1]);
int currentLength = setStartKey(ptr, minOffset, j+1, nSlots, false);
// From here on, we use startKey as our buffer (resetting minOffset and maxOffset)
// We've copied the part of the current key above that we need into startKey
// Reinitialize the iterator to be positioned at previous slot position
minOffset = 0;
maxOffset = startKeyLength;
schema.iterator(startKey, minOffset, maxOffset, ptr,
        ScanUtil.getRowKeyPosition(slotSpan, j)+1);
{code}
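
For reference, a query shaped roughly like this would exercise a slot span, 
since the Row Value Constructor makes one skip-scan slot cover two PK columns 
(the table and values are illustrative, not from the patch):

{code}
-- Hypothetical table; the RVC over (K1, K2) spans two PK slots.
CREATE TABLE T (K1 INTEGER NOT NULL, K2 INTEGER NOT NULL, K3 INTEGER NOT NULL,
    V VARCHAR CONSTRAINT PK PRIMARY KEY (K1, K2, K3));
SELECT * FROM T WHERE (K1, K2) IN ((2, 5), (2, 7)) AND K3 BETWEEN 9 AND 10;
{code}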

> SkipScanFilter may repeatedly copy rowKey Columns to startKey
> -
>
> Key: PHOENIX-3705
> URL: https://issues.apache.org/jira/browse/PHOENIX-3705
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.9.0
>Reporter: chenglei
> Attachments: PHOENIX-3705_v1.patch
>
>
> See the following simple unit test first. The rowKey is composed of three 
> PInteger columns, and the slots of the SkipScanFilter are:
> [ [[1 - 4]], [5, 7], [[9 - 10]] ]
> When SkipScanFilter.filterKeyValue is invoked on a KeyValue whose rowKey is 
> [2,7,11], SkipScanFilter.filterKeyValue correctly returns 
> ReturnCode.SEEK_NEXT_USING_HINT, and SkipScanFilter.getNextCellHint should 
> return [3,5,9], but unfortunately it actually returns [2,8,5,9], a very 
> strange value, so the unit test fails.
> {code} 
> @Test
> public void testNavigate() {
> RowKeySchemaBuilder builder = new RowKeySchemaBuilder(3);
> for(int i=0;i<3;i++) {
> builder.addField(
> new PDatum() {
> @Override
> public boolean isNullable() {
> return false;
> }
> @Override
> public PDataType getDataType() {
> return PInteger.INSTANCE;
> }
> @Override
> public Integer getMaxLength() {
> return PInteger.INSTANCE.getMaxLength(null);
> }
> @Override
> public Integer getScale() {
> return PInteger.INSTANCE.getScale(null);
> }
> @Override
> public SortOrder getSortOrder() {
> return SortOrder.getDefault();
> }
> }, false, SortOrder.getDefault());
> }
> 
> List<List<KeyRange>> rowKeyColumnRangesList = Arrays.asList(  
> Arrays.asList(
> 
> PInteger.INSTANCE.getKeyRange(PInteger.INSTANCE.toBytes(1), true, 
> PInteger.INSTANCE.toBytes(4), true)),
> Arrays.asList(
> KeyRange.getKeyRange(PInteger.INSTANCE.toBytes(5)),
> KeyRange.getKeyRange(PInteger.INSTANCE.toBytes(7))),
> Arrays.asList(
> 
> PInteger.INSTANCE.getKeyRange(PInteger.INSTANCE.toBytes(9), true, 
> PInteger.INSTANCE.toBytes(10), true))
> );
> 
> SkipScanFilter skipScanFilter=new 
> SkipScanFilter(rowKeyColumnRangesList, builder.build());
> 
> System.out.println(skipScanFilter);
> 
> byte[] rowKey=ByteUtil.concat(
> PInteger.INSTANCE.toBytes(2), 
> PInteger.INSTANCE.toBytes(7),
> PInteger.INSTANCE.toBytes(11));
> KeyValue keyValue=KeyValue.createFirstOnRow(rowKey);
> 

[jira] [Commented] (PHOENIX-3649) After PHOENIX-3271 higher memory consumption on RS leading to OOM/abort on immutable index creation with multiple regions on single RS

2017-03-01 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890793#comment-15890793
 ] 

James Taylor commented on PHOENIX-3649:
---

We set the cell timestamp in MutationState (based on return of 
MutationState.validate()) so that all of the mutations for an UPSERT SELECT 
have a consistent timestamp. Since the server-side execution is bypassing 
MutationState, we're skipping that (and for the same reason, you're right, we 
can't run it server side when an immutable table has indexes).

There's code in MetaDataClient.buildIndex() that attempts to handle this case 
of an UPSERT SELECT having started but not yet completed when a CREATE INDEX is 
executed (i.e. the statements are overlapping). The code executes a second pass 
to pick up any data table rows that may have been in the process of being 
created *before* the index was created (so that command would not know of the 
index, hence the incremental maintenance would not have been done). This second 
pass is time bounded by 1) the start of the index build minus some "play" until 
2) the start of the index build. If the server-side runs the UPSERT SELECT with 
the latest time stamp, this second pass won't pick up the rows. This isn't a 
perfect solution, but it's the best we could come up with.

I think short term, the easiest fix would be to use 
StatementContext.getCurrentTime() to get the time stamp at which the statement 
was compiled and pass this through to the server-side. This will fix 
ImmutableIndexIT#testCreateIndexDuringUpsertSelect (for mutable and immutable 
tables). 

Longer term, it'd be good to go through the MutationState API on the 
server-side so we can execute an UPSERT SELECT on an immutable table with 
indexes. Perhaps we can send over the PTable of the target table from the 
client?

For PHOENIX-3583, I think we should give it more thought and target any changes 
for 4.11.
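
To sketch the short-term idea (the attribute name and helper class are 
hypothetical, not existing Phoenix constants):

{code}
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class UpsertSelectTsSketch {
    // Hypothetical scan attribute key; Phoenix does not define this constant.
    static final String TS_ATTR = "_UpsertSelectCompileTs";

    // Client side: stamp the scan with StatementContext.getCurrentTime(),
    // the timestamp at which the statement was compiled.
    static void attachTimestamp(Scan scan, long compileTimeTs) {
        scan.setAttribute(TS_ATTR, Bytes.toBytes(compileTimeTs));
    }

    // Server side: use the client-supplied timestamp for all mutations so the
    // UPSERT SELECT behaves as if it had gone through MutationState.
    static long resolveTimestamp(Scan scan, long serverNow) {
        byte[] ts = scan.getAttribute(TS_ATTR);
        return ts == null ? serverNow : Bytes.toLong(ts);
    }
}
{code}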

> After PHOENIX-3271 higher memory consumption on RS leading to OOM/abort on 
> immutable index creation with multiple regions on single RS
> --
>
> Key: PHOENIX-3649
> URL: https://issues.apache.org/jira/browse/PHOENIX-3649
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.9.0
>Reporter: Mujtaba Chohan
>Assignee: Ankit Singhal
>Priority: Blocker
> Fix For: 4.9.1, 4.10.0
>
> Attachments: PHOENIX-3649.patch, PHOENIX-3649_v1.patch
>
>
> *Configuration*
> hbase-0.98.23 standalone
> Heap 5GB
> *When*
> Verified that this happens after PHOENIX-3271 Distribute UPSERT SELECT across 
> cluster. 
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commitdiff;h=accd4a276d1085e5d1069caf93798d8f301e4ed6
> To repro
> {noformat}
> CREATE TABLE INDEXED_TABLE (HOST CHAR(2) NOT NULL,DOMAIN VARCHAR NOT NULL, 
> FEATURE VARCHAR NOT NULL,DATE DATE NOT NULL,USAGE.CORE BIGINT,USAGE.DB 
> BIGINT,STATS.ACTIVE_VISITOR INTEGER CONSTRAINT PK PRIMARY KEY (HOST, DOMAIN, 
> FEATURE, DATE)) IMMUTABLE_ROWS=true,MAX_FILESIZE=30485760
> {noformat}
> Upsert 2M rows (CSV is available at https://goo.gl/OsTSKB), which will create 
> ~4 regions on a single RS, and then create the index with data present:
> {noformat}
> CREATE INDEX idx5 ON INDEXED_TABLE (CORE) INCLUDE (DB,ACTIVE_VISITOR)
> {noformat}
> From RS log
> {noformat}
> 2017-02-02 13:29:06,899 WARN  [rs,51371,1486070044538-HeapMemoryChore] 
> regionserver.HeapMemoryManager: heapOccupancyPercent 0.97875696 is above heap 
> occupancy alarm watermark (0.95)
> 2017-02-02 13:29:18,198 INFO  [SessionTracker] server.ZooKeeperServer: 
> Expiring session 0x15a00ad4f31, timeout of 1ms exceeded
> 2017-02-02 13:29:18,231 WARN  [JvmPauseMonitor] util.JvmPauseMonitor: 
> Detected pause in JVM or host machine (eg GC): pause of approximately 10581ms
> GC pool 'ParNew' had collection(s): count=4 time=139ms
> 2017-02-02 13:29:19,669 FATAL [RS:0;rs:51371-EventThread] 
> regionserver.HRegionServer: ABORTING region server rs,51371,1486070044538: 
> regionserver:51371-0x15a00ad4f31, quorum=localhost:2181, baseZNode=/hbase 
> regionserver:51371-0x15a00ad4f31 received expired from ZooKeeper, aborting
> {noformat}
> Prior to the change index creation succeeds with as little as 2GB heap.
> [~an...@apache.org]





[jira] [Commented] (PHOENIX-3654) Load Balancer for thin client

2017-03-01 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890655#comment-15890655
 ] 

Andrew Purtell commented on PHOENIX-3654:
-

bq. "If multiple cluster is using same zookeeper ensemble, then the security 
would be based on the parent cluster name as present in hbase-site.xml."

We should expand on this: what are the security concerns, and how are they 
addressed? Protecting the ZK znodes with ACLs? Providing configuration for 
which credentials to use for cluster access?

bq. "The PQS will create an ephemeral node under the a parent node and register 
itself. [...] PQS will also keep updating it’s znode with the number of 
connection it is handling. The update could be done via creating a child node 
within its structure"

I believe ephemerals aren't allowed to have children, at least not until 
container znodes are offered in a GA version of ZooKeeper. You can write and 
update a structure as the data of the ephemeral znode, though; you'll probably 
opt for JSON encoding.

bq. " It add watcher so that any change to the ephemeral node will also modify 
the cached data"

You will need a watch on the parent of the ephemerals to catch any changes in 
membership. You will need a watch on each ephemeral to catch a change in data. 
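
To make that concrete, here is a minimal sketch against the plain ZooKeeper 
client API; the paths, JSON fields, and session timeout are illustrative 
assumptions, and the parent znode is assumed to already exist:

{code}
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;
import java.nio.charset.StandardCharsets;

public class PqsRegistrationSketch {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("zkhost:2181", 30000, event -> {});

        // The load structure lives in the ephemeral's data, not in children.
        byte[] load = "{\"host\":\"pqs1\",\"port\":8765,\"connections\":0}"
                .getBytes(StandardCharsets.UTF_8);
        String path = zk.create("/phoenix/pqs/pqs1", load,
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);

        // PQS overwrites the same znode's data as its connection count changes.
        zk.setData(path, load, -1);

        // A client sets a child watch on the parent for membership changes and
        // a data watch on each ephemeral for load changes.
        for (String child : zk.getChildren("/phoenix/pqs", true)) {
            zk.getData("/phoenix/pqs/" + child, true, null);
        }
    }
}
{code}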



> Load Balancer for thin client
> -
>
> Key: PHOENIX-3654
> URL: https://issues.apache.org/jira/browse/PHOENIX-3654
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 4.8.0
> Environment: Linux 3.13.0-107-generic kernel, v4.9.0-HBase-0.98
>Reporter: Rahul Shrivastava
> Fix For: 4.9.0
>
> Attachments: LoadBalancerDesign.pdf
>
>   Original Estimate: 240h
>  Remaining Estimate: 240h
>
> We have been having internal discussions on a load balancer for the PQS thin 
> client. The general consensus is to have a load balancer embedded in the thin 
> client instead of using an external load balancer such as haproxy. The idea is 
> not to have another layer between the client and PQS; such a layer adds 
> operational cost for the system, which currently leads to delays in executing 
> projects.
> But this also comes with the challenge of building an embedded load balancer 
> that can maintain sticky sessions and do fair load balancing while knowing the 
> load downstream of the PQS servers. In addition, the load balancer needs to 
> know the locations of the multiple PQS servers, so the thin client needs to 
> keep track of PQS servers via zookeeper (or other means). 
> In the new design, it is proposed that the client (the PQS client) have an 
> embedded load balancer.
> Where will the load balancer sit?
> The load balancer will be embedded within the app server client.  
> How will the load balancer work? 
> The load balancer will contact zookeeper to get the locations of PQS; for 
> this, PQS needs to register itself with ZK once it comes online, and the 
> zookeeper location is in hbase-site.xml. The load balancer will maintain a 
> small cache of connections to PQS, and when a request comes in, it will check 
> for an open connection from the cache. 
> How will the load balancer know the load on PQS?
> To start with, it will pick a random open connection to PQS (see the sketch 
> below), which means the load balancer does not know the PQS load. Later, we 
> can augment the code so that the thin client receives load info from PQS and 
> makes intelligent decisions.  
> How will the load balancer maintain sticky sessions?
> We still need to investigate how to implement sticky sessions; we can look 
> for an open source implementation of the same.
> How will PQS register itself with the service locator?
> PQS will have the location of zookeeper from hbase-site.xml and will register 
> itself with zookeeper. The thin client will then find the PQS locations using 
> zookeeper.
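
A minimal sketch of the random pick over a cached connection list described 
above (type and method names are hypothetical, not from the design doc):
{code}
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

class RandomPqsPicker {
    // Placeholder for whatever connection handle the thin client ends up caching.
    static class PqsConnection { }

    // Pick a random open connection from the cache, as described above.
    static PqsConnection pick(List<PqsConnection> cache) {
        if (cache.isEmpty()) {
            throw new IllegalStateException("no PQS connections cached");
        }
        return cache.get(ThreadLocalRandom.current().nextInt(cache.size()));
    }
}
{code}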



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3689) Not determinist order by with limit

2017-03-01 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890621#comment-15890621
 ] 

James Taylor commented on PHOENIX-3689:
---

What version of Phoenix and HBase are you using? If not 4.9.0, would you mind 
trying that to see if the issue persists? Can you also include the explain plan 
by doing this:
{code}
explain select dt from TT group by dt order by dt desc limit 1;
{code}
If you can repro it through a standalone test case, that would be ideal. If it 
does repro, it'd definitely be a serious bug.

> Not determinist order by with limit
> ---
>
> Key: PHOENIX-3689
> URL: https://issues.apache.org/jira/browse/PHOENIX-3689
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Arthur
>
> The following request does not return the last value of table TT:
> select * from TT order by dt desc limit 1;
> Adding a 'group by dt' clause brings back the correct result.
> I noticed that an order by with 'limit 10' returns a merge of 10 results from 
> each region, not the 10 results of the whole request.
> So 'order by' is not deterministic. Is it a bug or a feature?
> Here is my DDL:
> {code}
> CREATE TABLE TT (dt timestamp NOT NULL, message bigint NOT NULL, id 
> varchar(20) NOT NULL, version varchar CONSTRAINT PK PRIMARY KEY (dt, message, 
> id));
> {code}
> The issue occurs with a lot of data. I think the 'order by' clause is applied 
> per region and not to the whole result, so limit 1 returns from the first 
> region that answers and Phoenix caches it. With only one region, this does 
> not occur.
> This script generates enough data to trigger the issue:
> {code}
> #!/usr/bin/python
> import string
> from datetime import datetime, timedelta
> dt = datetime(2017, 1, 1, 3)
> with open('data.csv', 'w') as file:
>     for i in range(0, 1000):
>         newdt = dt + timedelta(microseconds=i*1)
>         file.write("{};{};{};\n".format(datetime.strftime(newdt,
>             "%Y-%m-%d %H:%M:%S.%f"), 91 if i % 10 == 0 else 100, str(i).zfill(20)))
> {code}
> With this data set, the last value is: 2017-01-02 06:46:39.99
> The result with the order by clause is not the last value:
> {noformat}
> select dt from TT order by dt desc limit 1;
> +--+
> |DT|
> +--+
> | 2017-01-01 07:54:40.730  |
> {noformat}
> The correct result is given when using group by, but I need to get all the 
> columns.
> {noformat}
> select dt from TT group by dt order by dt  desc limit 1;
> +--+
> |DT|
> +--+
> | 2017-01-02 06:46:39.990  |
> +--+
> {noformat}
> I use a subquery as a workaround, but performance is not good.
> {noformat}
> select * from TT where dt = ANY(select dt from TT group by dt order by dt 
> desc limit 1);
> +--+--+---+--+
> |DT| MESSAGE  |  ID   | VERSION  |
> +--+--+---+--+
> | 2017-01-02 06:46:39.990  | 100  | 0999  |  |
> +--+--+---+--+
> 1 row selected (8.393 seconds)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (PHOENIX-3689) Not determinist order by with limit

2017-03-01 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3689:
--
Description: 
The following request does not return the last value of table TT:
select * from TT order by dt desc limit 1;
Adding a 'group by dt' clause brings back the correct result.

I noticed that an order by with 'limit 10' returns a merge of 10 results from 
each region, not the 10 results of the whole request.

So 'order by' is not deterministic. Is it a bug or a feature?

Here is my DDL:
{code}
CREATE TABLE TT (dt timestamp NOT NULL, message bigint NOT NULL, id varchar(20) 
NOT NULL, version varchar CONSTRAINT PK PRIMARY KEY (dt, message, id));
{code}

The issue occurs with a lot of data. I think the 'order by' clause is applied 
per region and not to the whole result, so limit 1 returns from the first 
region that answers and Phoenix caches it. With only one region, this does not 
occur.

This script generates enough data to trigger the issue:
{code}
#!/usr/bin/python

import string
from datetime import datetime, timedelta

dt = datetime(2017, 1, 1, 3)
with open('data.csv', 'w') as file:
    for i in range(0, 1000):
        newdt = dt + timedelta(microseconds=i*1)
        file.write("{};{};{};\n".format(datetime.strftime(newdt,
            "%Y-%m-%d %H:%M:%S.%f"), 91 if i % 10 == 0 else 100, str(i).zfill(20)))
{code}

With this data set, the last value is: 2017-01-02 06:46:39.99

The result with the order by clause is not the last value:
{noformat}
select dt from TT order by dt desc limit 1;
+--+
|DT|
+--+
| 2017-01-01 07:54:40.730  |
{noformat}

The correct result is given when using group by, but I need to get all the 
columns.
{noformat}
select dt from TT group by dt order by dt  desc limit 1;
+--+
|DT|
+--+
| 2017-01-02 06:46:39.990  |
+--+
{noformat}

I use a subquery as a workaround, but performance is not good.
{noformat}
select * from TT where dt = ANY(select dt from TT group by dt order by dt desc 
limit 1);
+--+--+---+--+
|DT| MESSAGE  |  ID   | VERSION  |
+--+--+---+--+
| 2017-01-02 06:46:39.990  | 100  | 0999  |  |
+--+--+---+--+
1 row selected (8.393 seconds)
{noformat}

  was:
The following request does not return the last value of table TT:
select * from TT order by dt desc limit 1;
Adding a 'group by dt' clause brings back the correct result.

I noticed that an order by with 'limit 10' returns a merge of 10 results from 
each region, not the 10 results of the whole request.

So 'order by' is not deterministic. Is it a bug or a feature?

Here is my DDL:
{noformat}
CREATE TABLE TT (dt timestamp NOT NULL, message bigint NOT NULL, id varchar(20) 
NOT NULL, version varchar CONSTRAINT PK PRIMARY KEY (dt, message, id));
{noformat}

The issue occurs with a lot of data. I think the 'order by' clause is applied 
per region and not to the whole result, so limit 1 returns from the first 
region that answers and Phoenix caches it. With only one region, this does not 
occur.

This script generates enough data to trigger the issue:
{code}
#!/usr/bin/python

import string
from datetime import datetime, timedelta

dt = datetime(2017, 1, 1, 3)
with open('data.csv', 'w') as file:
    for i in range(0, 1000):
        newdt = dt + timedelta(microseconds=i*1)
        file.write("{};{};{};\n".format(datetime.strftime(newdt,
            "%Y-%m-%d %H:%M:%S.%f"), 91 if i % 10 == 0 else 100, str(i).zfill(20)))
{code}

With this data set, the last value is: 2017-01-02 06:46:39.99

The result with the order by clause is not the last value:
{noformat}
select dt from TT order by dt desc limit 1;
+--+
|DT|
+--+
| 2017-01-01 07:54:40.730  |
{noformat}

The correct result is given when using group by, but I need to get all the 
columns.
{noformat}
select dt from TT group by dt order by dt  desc limit 1;
+--+
|DT|
+--+
| 2017-01-02 06:46:39.990  |
+--+
{noformat}

I use a subquery as a workaround, but performance is not good.
{noformat}
select * from TT where dt = ANY(select dt from TT group by dt order by dt desc 
limit 1);
+--+--+---+--+
|DT| MESSAGE  |  ID   | VERSION  |
+--+--+---+--+
| 2017-01-02 06:46:39.990  | 100  | 0999  |  |

Re: [ANNOUNCE] New Apache Phoenix committer - Geoffrey Jacoby

2017-03-01 Thread Andrew Purtell
Congratulations and welcome Geoffrey!

On Tue, Feb 28, 2017 at 12:07 PM, James Taylor 
wrote:

> On behalf of the Apache Phoenix PMC, I'm pleased to announce that Geoffrey
> Jacoby has accepted our invitation to become a committer on the Apache
> Phoenix project. He's been involved with Phoenix for two years and his list
> of fixed issues and enhancements is impressive [1], including in particular
> allowing our MR integration to write to a different target cluster [2],
> having the batch size of commits be byte-based instead of row-based [3],
> enabling replication to only occur for multi-tenant views in the system
> catalog [4], and putting resource controls in place to prevent too many
> simultaneous connections [5].
>
> Welcome aboard, Geoffrey. Looking forward to many more contributions!
>
> Regards,
> James
>
> [1]
> https://issues.apache.org/jira/issues/?jql=project%20%
> 3D%20PHOENIX%20and%20assignee%3Dgjacoby
> [2] https://issues.apache.org/jira/browse/PHOENIX-1653
> [3] https://issues.apache.org/jira/browse/PHOENIX-541
> [4] https://issues.apache.org/jira/browse/PHOENIX-3639
> [5] https://issues.apache.org/jira/browse/PHOENIX-3663
>



-- 
Best regards,

   - Andy

If you are given a choice, you believe you have acted freely. - Raymond
Teller (via Peter Watts)


[jira] [Commented] (PHOENIX-3539) Fix bulkload for StorageScheme - ONE_CELL_PER_KEYVALUE_COLUMN

2017-03-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890128#comment-15890128
 ] 

Hudson commented on PHOENIX-3539:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1571 (See 
[https://builds.apache.org/job/Phoenix-master/1571/])
PHOENIX-3539 Fix bulkload for StorageScheme - (samarth: rev 
5f5662b24dad478c9cb0917f20e2af9e6a539266)
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToBytesWritableMapper.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToKeyValueReducer.java
* (edit) 
phoenix-flume/src/main/java/org/apache/phoenix/flume/serializer/CsvEventSerializer.java


> Fix bulkload for StorageScheme - ONE_CELL_PER_KEYVALUE_COLUMN 
> --
>
> Key: PHOENIX-3539
> URL: https://issues.apache.org/jira/browse/PHOENIX-3539
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.10.0
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.10.0
>
> Attachments: PHOENIX-3539.patch, PHOENIX-3539_v2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3703) Immutable multitenant tables created as non-encoded irrespective of encoding property

2017-03-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15890127#comment-15890127
 ] 

Hudson commented on PHOENIX-3703:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1571 (See 
[https://builds.apache.org/job/Phoenix-master/1571/])
PHOENIX-3703 Immutable multitenant tables created as non-encoded (samarth: rev 
c387260cd87dc931f418e9cf35bf0d29d5cd8b7e)
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateTableIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java


> Immutable multitenant tables created as non-encoded irrespective of encoding 
> property
> -
>
> Key: PHOENIX-3703
> URL: https://issues.apache.org/jira/browse/PHOENIX-3703
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Fix For: 4.10.0
>
> Attachments: PHOENIX-3703.patch, PHOENIX-3703_v2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (PHOENIX-3705) SkipScanFilter may repeatedly copy rowKey Columns to startKey

2017-03-01 Thread chenglei (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated PHOENIX-3705:
--
Description: 
See the following simple unit test first. The rowKey is composed of three 
PInteger columns, and the slots of the SkipScanFilter are:
[ [[1 - 4]], [5, 7], [[9 - 10]] ]
When the SkipScanFilter.filterKeyValue method is invoked on a KeyValue whose 
rowKey is [2,7,11], SkipScanFilter.filterKeyValue correctly returns 
ReturnCode.SEEK_NEXT_USING_HINT, and SkipScanFilter.getNextCellHint should 
obviously return [3,5,9]; unfortunately, it actually returns [2,8,5,9], a very 
strange value, so the unit test fails.
{code} 
@Test
public void testNavigate() {
RowKeySchemaBuilder builder = new RowKeySchemaBuilder(3);
for(int i=0;i<3;i++) {
builder.addField(
new PDatum() {

@Override
public boolean isNullable() {
return false;
}

@Override
public PDataType getDataType() {
return PInteger.INSTANCE;
}

@Override
public Integer getMaxLength() {
return PInteger.INSTANCE.getMaxLength(null);
}

@Override
public Integer getScale() {
return PInteger.INSTANCE.getScale(null);
}

@Override
public SortOrder getSortOrder() {
return SortOrder.getDefault();
}

}, false, SortOrder.getDefault());
}

List<List<KeyRange>> rowKeyColumnRangesList = Arrays.asList(
Arrays.asList(
PInteger.INSTANCE.getKeyRange(PInteger.INSTANCE.toBytes(1), 
true, PInteger.INSTANCE.toBytes(4), true)),
Arrays.asList(
KeyRange.getKeyRange(PInteger.INSTANCE.toBytes(5)),
KeyRange.getKeyRange(PInteger.INSTANCE.toBytes(7))),
Arrays.asList(
PInteger.INSTANCE.getKeyRange(PInteger.INSTANCE.toBytes(9), 
true, PInteger.INSTANCE.toBytes(10), true))
);

SkipScanFilter skipScanFilter=new 
SkipScanFilter(rowKeyColumnRangesList, builder.build());

System.out.println(skipScanFilter);

byte[] rowKey=ByteUtil.concat(
PInteger.INSTANCE.toBytes(2), 
PInteger.INSTANCE.toBytes(7),
PInteger.INSTANCE.toBytes(11));
KeyValue keyValue=KeyValue.createFirstOnRow(rowKey);
ReturnCode returnCode=skipScanFilter.filterKeyValue(keyValue);
assertTrue(returnCode == ReturnCode.SEEK_NEXT_USING_HINT);
Cell nextCellHint=skipScanFilter.getNextCellHint(keyValue);
assertTrue(Bytes.toStringBinary(CellUtil.cloneRow(nextCellHint)).equals(
"\\x80\\x00\\x00\\x03\\x80\\x00\\x00\\x05\\x80\\x00\\x00\\x09"));
}
{code}
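
For reference, the expected hint bytes asserted above decode as follows (a 
minimal sketch reusing the same Phoenix and HBase utilities as the test; 
PInteger flips the sign bit, so the int 3 encodes as \x80\x00\x00\x03):
{code}
// Build the expected next-hint rowKey [3,5,9] and print its binary form.
byte[] expected = ByteUtil.concat(
        PInteger.INSTANCE.toBytes(3),
        PInteger.INSTANCE.toBytes(5),
        PInteger.INSTANCE.toBytes(9));
// Prints \x80\x00\x00\x03\x80\x00\x00\x05\x80\x00\x00\x09
System.out.println(Bytes.toStringBinary(expected));
{code}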

Let us see what is wrong. The first column of rowKey [2,7,11] is 2, which is in 
the SkipScanFilter's first slot range [1 - 4], so position[0] is 0 and we go to 
the second column, 7, which matches the second range [7] of the SkipScanFilter's 
second slot [5, 7], so position[1] is 1 and we go to the third column, 11, which 
is bigger than the third slot range [9 - 10], so position[2] is 0 and the 
{{SkipScanFilter.ptr}}, which points to the current column, still stays on the 
third column. Now we begin to backtrack to the second column: because the 
second range [7] of the second slot is a singleKey and there is no further 
range, position[1] is reset to 0 and we continue backtracking to the first 
column; because the first slot range [1 - 4] is not a singleKey, we stop 
backtracking at the first column.

Now the problem comes. At the following line 448 of the 
{{SkipScanFilter.navigate}} method, the {{SkipScanFilter.setStartKey}} method 
is invoked. It first copies the rowKey columns before {{SkipScanFilter.ptr}} to 
{{SkipScanFilter.startKey}}; because {{SkipScanFilter.ptr}} still points to the 
third column, the first and second columns are copied, and 
{{SkipScanFilter.startKey}} is [2,7] after this step. Then setStartKey copies 
the lower bounds of {{SkipScanFilter.slots}} from slot {{j+1}} onward, 
according to the {{SkipScanFilter.position}} array; because j is now 0 and both 
position[1] and position[2] are 0, {{SkipScanFilter.startKey}} becomes 
[2,7,5,9]. At the following line 457, {{ByteUtil.nextKey}} is invoked on [2,7], 
incrementing it to [2,8], so {{SkipScanFilter.startKey}} finally becomes 
[2,8,5,9].

{code}
448    int currentLength = setStartKey(ptr, minOffset, j+1, nSlots, false);
449    // From here on, we use startKey as 

[jira] [Updated] (PHOENIX-3705) SkipScanFilter may repeatedly copy rowKey Columns to startKey

2017-03-01 Thread chenglei (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated PHOENIX-3705:
--
Description: 
See the following simple unit test first. The rowKey is composed of three 
PInteger columns, and the slots of the SkipScanFilter are:
[ [[1 - 4]], [5, 7], [[9 - 10]] ]
When the SkipScanFilter.filterKeyValue method is invoked on a KeyValue whose 
rowKey is [2,7,11], SkipScanFilter.filterKeyValue correctly returns 
ReturnCode.SEEK_NEXT_USING_HINT, and SkipScanFilter.getNextCellHint should 
obviously return [3,5,9]; unfortunately, it actually returns [2,8,5,9], a very 
strange value, so the unit test fails.
{code} 
@Test
public void testNavigate() {
RowKeySchemaBuilder builder = new RowKeySchemaBuilder(3);
for(int i=0;i<3;i++) {
builder.addField(
new PDatum() {

@Override
public boolean isNullable() {
return false;
}

@Override
public PDataType getDataType() {
return PInteger.INSTANCE;
}

@Override
public Integer getMaxLength() {
return PInteger.INSTANCE.getMaxLength(null);
}

@Override
public Integer getScale() {
return PInteger.INSTANCE.getScale(null);
}

@Override
public SortOrder getSortOrder() {
return SortOrder.getDefault();
}

}, false, SortOrder.getDefault());
}

List<List<KeyRange>> rowKeyColumnRangesList = Arrays.asList(
Arrays.asList(
PInteger.INSTANCE.getKeyRange(PInteger.INSTANCE.toBytes(1), 
true, PInteger.INSTANCE.toBytes(4), true)),
Arrays.asList(
KeyRange.getKeyRange(PInteger.INSTANCE.toBytes(5)),
KeyRange.getKeyRange(PInteger.INSTANCE.toBytes(7))),
Arrays.asList(
PInteger.INSTANCE.getKeyRange(PInteger.INSTANCE.toBytes(9), 
true, PInteger.INSTANCE.toBytes(10), true))
);

SkipScanFilter skipScanFilter=new 
SkipScanFilter(rowKeyColumnRangesList, builder.build());

System.out.println(skipScanFilter);

byte[] rowKey=ByteUtil.concat(
PInteger.INSTANCE.toBytes(2), 
PInteger.INSTANCE.toBytes(7),
PInteger.INSTANCE.toBytes(11));
KeyValue keyValue=KeyValue.createFirstOnRow(rowKey);
ReturnCode returnCode=skipScanFilter.filterKeyValue(keyValue);
assertTrue(returnCode == ReturnCode.SEEK_NEXT_USING_HINT);
Cell nextCellHint=skipScanFilter.getNextCellHint(keyValue);
assertTrue(Bytes.toStringBinary(CellUtil.cloneRow(nextCellHint)).equals(
"\\x80\\x00\\x00\\x03\\x80\\x00\\x00\\x05\\x80\\x00\\x00\\x09"));
}
{code}

Let us see what is wrong. The first column of rowKey [2,7,11] is 2, which is in 
the SkipScanFilter's first slot range [1 - 4], so position[0] is 0 and we go to 
the second column, 7, which matches the second range [7] of the SkipScanFilter's 
second slot [5, 7], so position[1] is 1 and we go to the third column, 11, which 
is bigger than the third slot range [9 - 10], so position[2] is 0 and the 
{{SkipScanFilter.ptr}}, which points to the current column, still stays on the 
third column. Now we begin to backtrack to the second column: because the 
second range [7] of the second slot is a singleKey and there is no further 
range, position[1] is reset to 0 and we continue backtracking to the first 
column; because the first slot range [1 - 4] is not a singleKey, we stop 
backtracking at the first column.

Now the problem comes. At the following line 448 of the 
{{SkipScanFilter.navigate}} method, the {{SkipScanFilter.setStartKey}} method 
is invoked. It first copies the rowKey columns before {{SkipScanFilter.ptr}} to 
{{SkipScanFilter.startKey}}; because {{SkipScanFilter.ptr}} still points to the 
third column, the first and second columns are copied, and 
{{SkipScanFilter.startKey}} is [2,7] after this step. Then setStartKey copies 
the lower bounds of {{SkipScanFilter.slots}} from slot {{j+1}} onward, 
according to the {{SkipScanFilter.position}} array; because j is now 0 and both 
position[1] and position[2] are 0, {{SkipScanFilter.startKey}} becomes 
[2,7,5,9]. At the following line 457, {{ByteUtil.nextKey}} is invoked on [2,7], 
incrementing it to [2,8], so {{SkipScanFilter.startKey}} finally becomes 
[2,8,5,9].

{code}
448    int currentLength = setStartKey(ptr, minOffset, j+1, nSlots, false);
449    // From here on, we use startKey as our 

[jira] [Updated] (PHOENIX-3705) SkipScanFilter may repeatedly copy rowKey Columns to startKey

2017-03-01 Thread chenglei (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated PHOENIX-3705:
--
Description: 
See the following simple unit test first. The rowKey is composed of three 
PInteger columns, and the slots of the SkipScanFilter are:
[ [[1 - 4]], [5, 7], [[9 - 10]] ]
When the SkipScanFilter.filterKeyValue method is invoked on a KeyValue whose 
rowKey is [2,7,11], SkipScanFilter.filterKeyValue correctly returns 
ReturnCode.SEEK_NEXT_USING_HINT, and SkipScanFilter.getNextCellHint should 
obviously return [3,5,9]; unfortunately, it actually returns [2,8,5,9], a very 
strange value, so the unit test fails.
{code} 
@Test
public void testNavigate() {
RowKeySchemaBuilder builder = new RowKeySchemaBuilder(3);
for(int i=0;i<3;i++) {
builder.addField(
new PDatum() {

@Override
public boolean isNullable() {
return false;
}

@Override
public PDataType getDataType() {
return PInteger.INSTANCE;
}

@Override
public Integer getMaxLength() {
return PInteger.INSTANCE.getMaxLength(null);
}

@Override
public Integer getScale() {
return PInteger.INSTANCE.getScale(null);
}

@Override
public SortOrder getSortOrder() {
return SortOrder.getDefault();
}

}, false, SortOrder.getDefault());
}

List<List<KeyRange>> rowKeyColumnRangesList = Arrays.asList(
Arrays.asList(
PInteger.INSTANCE.getKeyRange(PInteger.INSTANCE.toBytes(1), 
true, PInteger.INSTANCE.toBytes(4), true)),
Arrays.asList(
KeyRange.getKeyRange(PInteger.INSTANCE.toBytes(5)),
KeyRange.getKeyRange(PInteger.INSTANCE.toBytes(7))),
Arrays.asList(
PInteger.INSTANCE.getKeyRange(PInteger.INSTANCE.toBytes(9), 
true, PInteger.INSTANCE.toBytes(10), true))
);

SkipScanFilter skipScanFilter=new 
SkipScanFilter(rowKeyColumnRangesList, builder.build());

System.out.println(skipScanFilter);

byte[] rowKey=ByteUtil.concat(
PInteger.INSTANCE.toBytes(2), 
PInteger.INSTANCE.toBytes(7),
PInteger.INSTANCE.toBytes(11));
KeyValue keyValue=KeyValue.createFirstOnRow(rowKey);
ReturnCode returnCode=skipScanFilter.filterKeyValue(keyValue);
assertTrue(returnCode == ReturnCode.SEEK_NEXT_USING_HINT);
Cell nextCellHint=skipScanFilter.getNextCellHint(keyValue);
assertTrue(Bytes.toStringBinary(CellUtil.cloneRow(nextCellHint)).equals(
"\\x80\\x00\\x00\\x03\\x80\\x00\\x00\\x05\\x80\\x00\\x00\\x09"));
}
{code}

Let us see what is wrong. The first column of rowKey [2,7,11] is 2, which is in 
the SkipScanFilter's first slot range [1 - 4], so position[0] is 0 and we go to 
the second column, 7, which matches the second range [7] of the SkipScanFilter's 
second slot [5, 7], so position[1] is 1 and we go to the third column, 11, which 
is bigger than the third slot range [9 - 10], so position[2] is 0 and we begin 
to backtrack to the second column: because the second range [7] of the second 
slot is a singleKey and there is no further range, position[1] is reset to 0 
and we continue backtracking to the first column; because the first slot range 
[1 - 4] is not a singleKey, we stop backtracking at the first column.

Now the problem comes. At the following line 448 of the 
{{SkipScanFilter.navigate}} method, the {{SkipScanFilter.setStartKey}} method 
is invoked. It first copies the rowKey columns before {{ptr}} to 
{{SkipScanFilter.startKey}}; because ptr still points to the third column, the 
first and second columns are copied, and {{SkipScanFilter.startKey}} is [2,7] 
after this step. Then setStartKey copies the lower bounds of 
{{SkipScanFilter.slots}} from slot j+1 onward, according to the 
{{SkipScanFilter.position}} array; because j is now 0 and both position[1] and 
position[2] are 0, {{SkipScanFilter.startKey}} becomes [2,7,5,9]. At the 
following line 457, {{ByteUtil.nextKey}} is invoked on [2,7], incrementing it 
to [2,8], so {{SkipScanFilter.startKey}} finally becomes [2,8,5,9].

{code}
448    int currentLength = setStartKey(ptr, minOffset, j+1, nSlots, false);
449    // From here on, we use startKey as our buffer (resetting minOffset and maxOffset)
450    // We've copied the part of the current key above that we need into 

[jira] [Commented] (PHOENIX-3705) SkipScanFilter may repeatedly copy rowKey Columns to startKey

2017-03-01 Thread chenglei (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15889941#comment-15889941
 ] 

chenglei commented on PHOENIX-3705:
---

I have uploaded my first patch; please help review it.

> SkipScanFilter may repeatedly copy rowKey Columns to startKey
> -
>
> Key: PHOENIX-3705
> URL: https://issues.apache.org/jira/browse/PHOENIX-3705
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.9.0
>Reporter: chenglei
> Attachments: PHOENIX-3705_v1.patch
>
>
> See the following simple unit test first. The rowKey is composed of three 
> PInteger columns, and the slots of the SkipScanFilter are:
> [ [[1 - 4]], [5, 7], [[9 - 10]] ]
> When the SkipScanFilter.filterKeyValue method is invoked on a KeyValue whose 
> rowKey is [2,7,11], SkipScanFilter.filterKeyValue correctly returns 
> ReturnCode.SEEK_NEXT_USING_HINT, and SkipScanFilter.getNextCellHint should 
> obviously return [3,5,9]; unfortunately, it actually returns [2,8,5,9], a 
> very strange value, so the unit test fails.
> {code} 
> @Test
> public void testNavigate() {
> RowKeySchemaBuilder builder = new RowKeySchemaBuilder(3);
> for(int i=0;i<3;i++) {
> builder.addField(
> new PDatum() {
> @Override
> public boolean isNullable() {
> return false;
> }
> @Override
> public PDataType getDataType() {
> return PInteger.INSTANCE;
> }
> @Override
> public Integer getMaxLength() {
> return PInteger.INSTANCE.getMaxLength(null);
> }
> @Override
> public Integer getScale() {
> return PInteger.INSTANCE.getScale(null);
> }
> @Override
> public SortOrder getSortOrder() {
> return SortOrder.getDefault();
> }
> }, false, SortOrder.getDefault());
> }
> 
> List<List<KeyRange>> rowKeyColumnRangesList = Arrays.asList(
> Arrays.asList(
> 
> PInteger.INSTANCE.getKeyRange(PInteger.INSTANCE.toBytes(1), true, 
> PInteger.INSTANCE.toBytes(4), true)),
> Arrays.asList(
> KeyRange.getKeyRange(PInteger.INSTANCE.toBytes(5)),
> KeyRange.getKeyRange(PInteger.INSTANCE.toBytes(7))),
> Arrays.asList(
> 
> PInteger.INSTANCE.getKeyRange(PInteger.INSTANCE.toBytes(9), true, 
> PInteger.INSTANCE.toBytes(10), true))
> );
> 
> SkipScanFilter skipScanFilter=new 
> SkipScanFilter(rowKeyColumnRangesList, builder.build());
> 
> System.out.println(skipScanFilter);
> 
> byte[] rowKey=ByteUtil.concat(
> PInteger.INSTANCE.toBytes(2), 
> PInteger.INSTANCE.toBytes(7),
> PInteger.INSTANCE.toBytes(11));
> KeyValue keyValue=KeyValue.createFirstOnRow(rowKey);
> ReturnCode returnCode=skipScanFilter.filterKeyValue(keyValue);
> assertTrue(returnCode == ReturnCode.SEEK_NEXT_USING_HINT);
> Cell nextCellHint=skipScanFilter.getNextCellHint(keyValue);
> 
> assertTrue(Bytes.toStringBinary(CellUtil.cloneRow(nextCellHint)).equals(
> "\\x80\\x00\\x00\\x03\\x80\\x00\\x00\\x05\\x80\\x00\\x00\\x09"));
> }
> {code}
> Let us see what is wrong. The first column of rowKey [2,7,11] is 2, which is 
> in the SkipScanFilter's first slot range [1 - 4], so position[0] is 0 and we 
> go to the second column, 7, which matches the second range [7] of the 
> SkipScanFilter's second slot [5, 7], so position[1] is 1 and we go to the 
> third column, 11, which is bigger than the third slot range [9 - 10], so 
> position[2] is 0 and we begin to backtrack to the second column: because the 
> second range [7] of the second slot is a singleKey and there is no further 
> range, position[1] is reset to 0 and we continue backtracking to the first 
> column; because the first slot range [1 - 4] is not a singleKey, we stop 
> backtracking at the first column.
> Now the problem comes. At the following line 448 of the 
> SkipScanFilter.navigate method, the SkipScanFilter.setStartKey method is 
> invoked. It first copies the rowKey columns before ptr to 
> SkipScanFilter.startKey; because ptr still points to the third column, the 
> first and second columns are copied, and SkipScanFilter.startKey is [2,7] 
> after this step. Then setStartKey copies the lower bound SkipScanFilter.slots 

[jira] [Updated] (PHOENIX-3705) SkipScanFilter may repeatedly copy rowKey Columns to startKey

2017-03-01 Thread chenglei (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated PHOENIX-3705:
--
Attachment: PHOENIX-3705_v1.patch

> SkipScanFilter may repeatedly copy rowKey Columns to startKey
> -
>
> Key: PHOENIX-3705
> URL: https://issues.apache.org/jira/browse/PHOENIX-3705
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.9.0
>Reporter: chenglei
> Attachments: PHOENIX-3705_v1.patch
>
>
> See the following simple unit test first. The rowKey is composed of three 
> PInteger columns, and the slots of the SkipScanFilter are:
> [ [[1 - 4]], [5, 7], [[9 - 10]] ]
> When the SkipScanFilter.filterKeyValue method is invoked on a KeyValue whose 
> rowKey is [2,7,11], SkipScanFilter.filterKeyValue correctly returns 
> ReturnCode.SEEK_NEXT_USING_HINT, and SkipScanFilter.getNextCellHint should 
> obviously return [3,5,9]; unfortunately, it actually returns [2,8,5,9], a 
> very strange value, so the unit test fails.
> {code} 
> @Test
> public void testNavigate() {
> RowKeySchemaBuilder builder = new RowKeySchemaBuilder(3);
> for(int i=0;i<3;i++) {
> builder.addField(
> new PDatum() {
> @Override
> public boolean isNullable() {
> return false;
> }
> @Override
> public PDataType getDataType() {
> return PInteger.INSTANCE;
> }
> @Override
> public Integer getMaxLength() {
> return PInteger.INSTANCE.getMaxLength(null);
> }
> @Override
> public Integer getScale() {
> return PInteger.INSTANCE.getScale(null);
> }
> @Override
> public SortOrder getSortOrder() {
> return SortOrder.getDefault();
> }
> }, false, SortOrder.getDefault());
> }
> 
> List<List<KeyRange>> rowKeyColumnRangesList = Arrays.asList(
> Arrays.asList(
> 
> PInteger.INSTANCE.getKeyRange(PInteger.INSTANCE.toBytes(1), true, 
> PInteger.INSTANCE.toBytes(4), true)),
> Arrays.asList(
> KeyRange.getKeyRange(PInteger.INSTANCE.toBytes(5)),
> KeyRange.getKeyRange(PInteger.INSTANCE.toBytes(7))),
> Arrays.asList(
> 
> PInteger.INSTANCE.getKeyRange(PInteger.INSTANCE.toBytes(9), true, 
> PInteger.INSTANCE.toBytes(10), true))
> );
> 
> SkipScanFilter skipScanFilter=new 
> SkipScanFilter(rowKeyColumnRangesList, builder.build());
> 
> System.out.println(skipScanFilter);
> 
> byte[] rowKey=ByteUtil.concat(
> PInteger.INSTANCE.toBytes(2), 
> PInteger.INSTANCE.toBytes(7),
> PInteger.INSTANCE.toBytes(11));
> KeyValue keyValue=KeyValue.createFirstOnRow(rowKey);
> ReturnCode returnCode=skipScanFilter.filterKeyValue(keyValue);
> assertTrue(returnCode == ReturnCode.SEEK_NEXT_USING_HINT);
> Cell nextCellHint=skipScanFilter.getNextCellHint(keyValue);
> 
> assertTrue(Bytes.toStringBinary(CellUtil.cloneRow(nextCellHint)).equals(
> "\\x80\\x00\\x00\\x03\\x80\\x00\\x00\\x05\\x80\\x00\\x00\\x09"));
> }
> {code}
> Let us see what is wrong. The first column of rowKey [2,7,11] is 2, which is 
> in the SkipScanFilter's first slot range [1 - 4], so position[0] is 0 and we 
> go to the second column, 7, which matches the second range [7] of the 
> SkipScanFilter's second slot [5, 7], so position[1] is 1 and we go to the 
> third column, 11, which is bigger than the third slot range [9 - 10], so 
> position[2] is 0 and we begin to backtrack to the second column: because the 
> second range [7] of the second slot is a singleKey and there is no further 
> range, position[1] is reset to 0 and we continue backtracking to the first 
> column; because the first slot range [1 - 4] is not a singleKey, we stop 
> backtracking at the first column.
> Now the problem comes. At the following line 448 of the 
> SkipScanFilter.navigate method, the SkipScanFilter.setStartKey method is 
> invoked. It first copies the rowKey columns before ptr to 
> SkipScanFilter.startKey; because ptr still points to the third column, the 
> first and second columns are copied, and SkipScanFilter.startKey is [2,7] 
> after this step. Then setStartKey copies the lower bounds of 
> SkipScanFilter.slots from slot j+1 onward, according to SkipScanFilter.position 

[jira] [Updated] (PHOENIX-3705) SkipScanFilter may repeatedly copy rowKey Columns to startKey

2017-03-01 Thread chenglei (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated PHOENIX-3705:
--
Description: 
See the following simple unit test first. The rowKey is composed of three 
PInteger columns, and the slots of the SkipScanFilter are:
[ [[1 - 4]], [5, 7], [[9 - 10]] ]
When the SkipScanFilter.filterKeyValue method is invoked on a KeyValue whose 
rowKey is [2,7,11], SkipScanFilter.filterKeyValue correctly returns 
ReturnCode.SEEK_NEXT_USING_HINT, and SkipScanFilter.getNextCellHint should 
obviously return [3,5,9]; unfortunately, it actually returns [2,8,5,9], a very 
strange value, so the unit test fails.
{code} 
@Test
public void testNavigate() {
RowKeySchemaBuilder builder = new RowKeySchemaBuilder(3);
for(int i=0;i<3;i++) {
builder.addField(
new PDatum() {

@Override
public boolean isNullable() {
return false;
}

@Override
public PDataType getDataType() {
return PInteger.INSTANCE;
}

@Override
public Integer getMaxLength() {
return PInteger.INSTANCE.getMaxLength(null);
}

@Override
public Integer getScale() {
return PInteger.INSTANCE.getScale(null);
}

@Override
public SortOrder getSortOrder() {
return SortOrder.getDefault();
}

}, false, SortOrder.getDefault());
}

List<List<KeyRange>> rowKeyColumnRangesList = Arrays.asList(
Arrays.asList(
PInteger.INSTANCE.getKeyRange(PInteger.INSTANCE.toBytes(1), 
true, PInteger.INSTANCE.toBytes(4), true)),
Arrays.asList(
KeyRange.getKeyRange(PInteger.INSTANCE.toBytes(5)),
KeyRange.getKeyRange(PInteger.INSTANCE.toBytes(7))),
Arrays.asList(
PInteger.INSTANCE.getKeyRange(PInteger.INSTANCE.toBytes(9), 
true, PInteger.INSTANCE.toBytes(10), true))
);

SkipScanFilter skipScanFilter=new 
SkipScanFilter(rowKeyColumnRangesList, builder.build());

System.out.println(skipScanFilter);

byte[] rowKey=ByteUtil.concat(
PInteger.INSTANCE.toBytes(2), 
PInteger.INSTANCE.toBytes(7),
PInteger.INSTANCE.toBytes(11));
KeyValue keyValue=KeyValue.createFirstOnRow(rowKey);
ReturnCode returnCode=skipScanFilter.filterKeyValue(keyValue);
assertTrue(returnCode == ReturnCode.SEEK_NEXT_USING_HINT);
Cell nextCellHint=skipScanFilter.getNextCellHint(keyValue);
assertTrue(Bytes.toStringBinary(CellUtil.cloneRow(nextCellHint)).equals(
"\\x80\\x00\\x00\\x03\\x80\\x00\\x00\\x05\\x80\\x00\\x00\\x09"));
}
{code}

Let us see what is wrong. The first column of rowKey [2,7,11] is 2, which is in 
the SkipScanFilter's first slot range [1 - 4], so position[0] is 0 and we go to 
the second column, 7, which matches the second range [7] of the SkipScanFilter's 
second slot [5, 7], so position[1] is 1 and we go to the third column, 11, which 
is bigger than the third slot range [9 - 10], so position[2] is 0 and we begin 
to backtrack to the second column: because the second range [7] of the second 
slot is a singleKey and there is no further range, position[1] is reset to 0 
and we continue backtracking to the first column; because the first slot range 
[1 - 4] is not a singleKey, we stop backtracking at the first column.

Now the problem comes. At the following line 448 of the SkipScanFilter.navigate 
method, the SkipScanFilter.setStartKey method is invoked. It first copies the 
rowKey columns before ptr to SkipScanFilter.startKey; because ptr still points 
to the third column, the first and second columns are copied, and 
SkipScanFilter.startKey is [2,7] after this step. Then setStartKey copies the 
lower bounds of SkipScanFilter.slots from slot j+1 onward, according to the 
SkipScanFilter.position array; because j is now 0 and both position[1] and 
position[2] are 0, SkipScanFilter.startKey becomes [2,7,5,9]. At the following 
line 457, ByteUtil.nextKey is invoked on [2,7], incrementing it to [2,8], so 
SkipScanFilter.startKey finally becomes [2,8,5,9].

{code}
448    int currentLength = setStartKey(ptr, minOffset, j+1, nSlots, false);
449    // From here on, we use startKey as our buffer (resetting minOffset and maxOffset)
450    // We've copied the part of the current key above that we need into startKey
451    // 

[jira] [Updated] (PHOENIX-3705) SkipScanFilter may repeatedly copy rowKey Columns to startKey

2017-03-01 Thread chenglei (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated PHOENIX-3705:
--
Description: 
See the following simple unit test first. The rowKey is composed of three 
PInteger columns, and the slots of the SkipScanFilter are:
[ [[1 - 4]], [5, 7], [[9 - 10]] ]
When the SkipScanFilter.filterKeyValue method is invoked on a KeyValue whose 
rowKey is [2,7,11], SkipScanFilter.filterKeyValue correctly returns 
ReturnCode.SEEK_NEXT_USING_HINT, and SkipScanFilter.getNextCellHint should 
obviously return [3,5,9]; unfortunately, it actually returns [2,8,5,9], a very 
strange value, so the unit test fails.
{code} 
@Test
public void testNavigate() {
RowKeySchemaBuilder builder = new RowKeySchemaBuilder(3);
for(int i=0;i<3;i++) {
builder.addField(
new PDatum() {

@Override
public boolean isNullable() {
return false;
}

@Override
public PDataType getDataType() {
return PInteger.INSTANCE;
}

@Override
public Integer getMaxLength() {
return PInteger.INSTANCE.getMaxLength(null);
}

@Override
public Integer getScale() {
return PInteger.INSTANCE.getScale(null);
}

@Override
public SortOrder getSortOrder() {
return SortOrder.getDefault();
}

}, false, SortOrder.getDefault());
}

List<List<KeyRange>> rowKeyColumnRangesList = Arrays.asList(
Arrays.asList(
PInteger.INSTANCE.getKeyRange(PInteger.INSTANCE.toBytes(1), 
true, PInteger.INSTANCE.toBytes(4), true)),
Arrays.asList(
KeyRange.getKeyRange(PInteger.INSTANCE.toBytes(5)),
KeyRange.getKeyRange(PInteger.INSTANCE.toBytes(7))),
Arrays.asList(
PInteger.INSTANCE.getKeyRange(PInteger.INSTANCE.toBytes(9), 
true, PInteger.INSTANCE.toBytes(10), true))
);

SkipScanFilter skipScanFilter=new 
SkipScanFilter(rowKeyColumnRangesList, builder.build());

System.out.println(skipScanFilter);

byte[] rowKey=ByteUtil.concat(
PInteger.INSTANCE.toBytes(2), 
PInteger.INSTANCE.toBytes(7),
PInteger.INSTANCE.toBytes(11));
KeyValue keyValue=KeyValue.createFirstOnRow(rowKey);
ReturnCode returnCode=skipScanFilter.filterKeyValue(keyValue);
assertTrue(returnCode == ReturnCode.SEEK_NEXT_USING_HINT);
Cell nextCellHint=skipScanFilter.getNextCellHint(keyValue);
assertTrue(Bytes.toStringBinary(CellUtil.cloneRow(nextCellHint)).equals(
"\\x80\\x00\\x00\\x03\\x80\\x00\\x00\\x05\\x80\\x00\\x00\\x09"));
}
{code}

Let us see what is wrong. The first column of rowKey [2,7,11] is 2, which is in 
the SkipScanFilter's first slot range [1 - 4], so position[0] is 0 and we go to 
the second column, 7, which matches the second range [7] of the SkipScanFilter's 
second slot [5, 7], so position[1] is 1 and we go to the third column, 11, which 
is bigger than the third slot range [9 - 10], so position[2] is 0 and we begin 
to backtrack to the second column: because the second range [7] of the second 
slot is a singleKey and there is no further range, position[1] is reset to 0 
and we continue backtracking to the first column; because the first slot range 
[1 - 4] is not a singleKey, we stop backtracking at the first column.

Now the problem comes. At the following line 448 of the SkipScanFilter.navigate 
method, the SkipScanFilter.setStartKey method is invoked. It first copies the 
rowKey columns before ptr to SkipScanFilter.startKey; because ptr still points 
to the third column, the first and second columns are copied, and 
SkipScanFilter.startKey is [2,7] after this step. Then setStartKey copies the 
lower bounds of SkipScanFilter.slots from slot j+1 onward, according to the 
SkipScanFilter.position array; because j is now 0 and both position[1] and 
position[2] are 0, SkipScanFilter.startKey becomes [2,7,5,9]. At the following 
line 457, ByteUtil.nextKey is invoked on [2,7], incrementing it to [2,8], so 
SkipScanFilter.startKey finally becomes [2,8,5,9].

{code}
448    int currentLength = setStartKey(ptr, minOffset, j+1, nSlots, false);
449    // From here on, we use startKey as our buffer (resetting minOffset and maxOffset)
450    // We've copied the part of the current key above that we need into startKey
451    // 

[jira] [Updated] (PHOENIX-3705) SkipScanFilter may repeatedly copy rowKey Columns to startKey

2017-03-01 Thread chenglei (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated PHOENIX-3705:
--
Description: 
See the following simple unit test first. The rowKey is composed of three 
PInteger columns, and the slots of the SkipScanFilter are:
[ [[1 - 4]], [5, 7], [[9 - 10]] ]
When the SkipScanFilter.filterKeyValue method is invoked on a KeyValue whose 
rowKey is [2,7,11], SkipScanFilter.filterKeyValue correctly returns 
ReturnCode.SEEK_NEXT_USING_HINT, and SkipScanFilter.getNextCellHint should 
obviously return [3,5,9]; unfortunately, it actually returns [2,8,5,9], a very 
strange value, so the unit test fails.
{code} 
@Test
public void testNavigate() {
RowKeySchemaBuilder builder = new RowKeySchemaBuilder(3);
for(int i=0;i<3;i++) {
builder.addField(
new PDatum() {

@Override
public boolean isNullable() {
return false;
}

@Override
public PDataType getDataType() {
return PInteger.INSTANCE;
}

@Override
public Integer getMaxLength() {
return PInteger.INSTANCE.getMaxLength(null);
}

@Override
public Integer getScale() {
return PInteger.INSTANCE.getScale(null);
}

@Override
public SortOrder getSortOrder() {
return SortOrder.getDefault();
}

}, false, SortOrder.getDefault());
}

List<List<KeyRange>> rowKeyColumnRangesList = Arrays.asList(
Arrays.asList(
PInteger.INSTANCE.getKeyRange(PInteger.INSTANCE.toBytes(1), 
true, PInteger.INSTANCE.toBytes(4), true)),
Arrays.asList(
KeyRange.getKeyRange(PInteger.INSTANCE.toBytes(5)),
KeyRange.getKeyRange(PInteger.INSTANCE.toBytes(7))),
Arrays.asList(
PInteger.INSTANCE.getKeyRange(PInteger.INSTANCE.toBytes(9), 
true, PInteger.INSTANCE.toBytes(10), true))
);

SkipScanFilter skipScanFilter=new 
SkipScanFilter(rowKeyColumnRangesList, builder.build());

System.out.println(skipScanFilter);

byte[] rowKey=ByteUtil.concat(
PInteger.INSTANCE.toBytes(2), 
PInteger.INSTANCE.toBytes(7),
PInteger.INSTANCE.toBytes(11));
KeyValue keyValue=KeyValue.createFirstOnRow(rowKey);
ReturnCode returnCode=skipScanFilter.filterKeyValue(keyValue);
assertTrue(returnCode == ReturnCode.SEEK_NEXT_USING_HINT);
Cell nextCellHint=skipScanFilter.getNextCellHint(keyValue);
assertTrue(Bytes.toStringBinary(CellUtil.cloneRow(nextCellHint)).equals(
"\\x80\\x00\\x00\\x03\\x80\\x00\\x00\\x05\\x80\\x00\\x00\\x09"));
}
{code}

Let us see what is wrong. The first column of rowKey [2,7,11] is 2, which is in 
the SkipScanFilter's first slot range [1 - 4], so position[0] is 0 and we go to 
the second column, 7, which matches the second range [7] of the SkipScanFilter's 
second slot [5, 7], so position[1] is 1 and we go to the third column, 11, which 
is bigger than the third slot range [9 - 10], so position[2] is 0 and we begin 
to backtrack to the second column: because the second range [7] of the second 
slot is a singleKey and there is no further range, position[1] is reset to 0 
and we continue backtracking to the first column; because the first slot range 
[1 - 4] is not a singleKey, we stop backtracking at the first column.

Now the problem comes. At the following line 448 of the SkipScanFilter.navigate 
method, the SkipScanFilter.setStartKey method is invoked. It first copies the 
rowKey columns before ptr to SkipScanFilter.startKey; because ptr still points 
to the third column, the first and second columns are copied, and 
SkipScanFilter.startKey is [2,7] after this step. Then setStartKey copies the 
lower bounds of SkipScanFilter.slots from slot j+1 onward, according to the 
SkipScanFilter.position array; because j is now 0 and both position[1] and 
position[2] are 0, SkipScanFilter.startKey becomes [2,7,5,9]. At the following 
line 457, ByteUtil.nextKey is invoked on [2,7], incrementing it to [2,8], so 
SkipScanFilter.startKey finally becomes [2,8,5,9].

{code}
448    int currentLength = setStartKey(ptr, minOffset, j+1, nSlots, false);
449    // From here on, we use startKey as our buffer (resetting minOffset and maxOffset)
450    // We've copied the part of the current key above that we need into startKey
451    // 

[jira] [Updated] (PHOENIX-3705) SkipScanFilter may repeatedly copy rowKey Columns to startKey

2017-03-01 Thread chenglei (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated PHOENIX-3705:
--
Description: 
See the following simple unit test first. The rowKey is composed of three 
PInteger columns, and the slots of the SkipScanFilter are:
[ [[1 - 4]], [5, 7], [[9 - 10]] ]
When the SkipScanFilter.filterKeyValue method is invoked on a KeyValue whose 
rowKey is [2,7,11], SkipScanFilter.filterKeyValue correctly returns 
ReturnCode.SEEK_NEXT_USING_HINT, and SkipScanFilter.getNextCellHint should 
obviously return [3,5,9]; unfortunately, it actually returns [2,8,5,9], a very 
strange value, so the unit test fails.
{code} 
@Test
public void testNavigate() {
RowKeySchemaBuilder builder = new RowKeySchemaBuilder(3);
for(int i=0;i<3;i++) {
builder.addField(
new PDatum() {

@Override
public boolean isNullable() {
return false;
}

@Override
public PDataType getDataType() {
return PInteger.INSTANCE;
}

@Override
public Integer getMaxLength() {
return PInteger.INSTANCE.getMaxLength(null);
}

@Override
public Integer getScale() {
return PInteger.INSTANCE.getScale(null);
}

@Override
public SortOrder getSortOrder() {
return SortOrder.getDefault();
}

}, false, SortOrder.getDefault());
}

List<List<KeyRange>> rowKeyColumnRangesList = Arrays.asList(
Arrays.asList(
PInteger.INSTANCE.getKeyRange(PInteger.INSTANCE.toBytes(1), 
true, PInteger.INSTANCE.toBytes(4), true)),
Arrays.asList(
KeyRange.getKeyRange(PInteger.INSTANCE.toBytes(5)),
KeyRange.getKeyRange(PInteger.INSTANCE.toBytes(7))),
Arrays.asList(
PInteger.INSTANCE.getKeyRange(PInteger.INSTANCE.toBytes(9), 
true, PInteger.INSTANCE.toBytes(10), true))
);

SkipScanFilter skipScanFilter=new 
SkipScanFilter(rowKeyColumnRangesList, builder.build());

System.out.println(skipScanFilter);

byte[] rowKey=ByteUtil.concat(
PInteger.INSTANCE.toBytes(2), 
PInteger.INSTANCE.toBytes(7),
PInteger.INSTANCE.toBytes(11));
KeyValue keyValue=KeyValue.createFirstOnRow(rowKey);
ReturnCode returnCode=skipScanFilter.filterKeyValue(keyValue);
assertTrue(returnCode == ReturnCode.SEEK_NEXT_USING_HINT);
Cell nextCellHint=skipScanFilter.getNextCellHint(keyValue);

assertTrue(Bytes.toStringBinary(CellUtil.cloneRow(nextCellHint)).equals("\\x80\\x00\\x00\\x03\\x80\\x00\\x00\\x05\\x80\\x00\\x00\\x09"));
}
{code}

Let us see what is wrong. The first column of rowKey [2,7,11] is 2, which is in 
the SkipScanFilter's first slot range [1 - 4], so position[0] is 0 and we go to 
the second column, 7, which matches the second range [7] of the SkipScanFilter's 
second slot [5, 7], so position[1] is 1 and we go to the third column, 11, which 
is bigger than the third slot range [9 - 10], so position[2] is 0 and we begin 
to backtrack to the second column: because the second range [7] of the second 
slot is a singleKey and there is no further range, position[1] is reset to 0 
and we continue backtracking to the first column; because the first slot range 
[1 - 4] is not a singleKey, we stop backtracking at the first column.

Now the problem comes. At the following line 448 of the SkipScanFilter.navigate 
method, the SkipScanFilter.setStartKey method is invoked. It first copies the 
rowKey columns before ptr to SkipScanFilter.startKey; because ptr still points 
to the third column, the first and second columns are copied, and 
SkipScanFilter.startKey is [2,7] after this step. Then setStartKey copies the 
lower bounds of SkipScanFilter.slots from slot j+1 onward, according to the 
SkipScanFilter.position array; because j is now 0 and both position[1] and 
position[2] are 0, SkipScanFilter.startKey becomes [2,7,5,9]. At the following 
line 457, ByteUtil.nextKey is invoked on [2,7], incrementing it to [2,8], so 
SkipScanFilter.startKey finally becomes [2,8,5,9].

{code}
448    int currentLength = setStartKey(ptr, minOffset, j+1, nSlots, false);
449    // From here on, we use startKey as our buffer (resetting minOffset and maxOffset)
450    // We've copied the part of the current key above that we need into startKey
451    // Reinitialize the 

[jira] [Resolved] (PHOENIX-3676) Support CASE statements in Phoenix-Calcite

2017-03-01 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla resolved PHOENIX-3676.
--
Resolution: Fixed

> Support CASE statements in Phoenix-Calcite
> --
>
> Key: PHOENIX-3676
> URL: https://issues.apache.org/jira/browse/PHOENIX-3676
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>  Labels: calcite
> Attachments: PHOENIX-3676.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (PHOENIX-3705) SkipScanFilter may repeatedly copy rowKey Columns to startKey

2017-03-01 Thread chenglei (JIRA)
chenglei created PHOENIX-3705:
-

 Summary: SkipScanFilter may repeatedly copy rowKey Columns to 
startKey
 Key: PHOENIX-3705
 URL: https://issues.apache.org/jira/browse/PHOENIX-3705
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.9.0
Reporter: chenglei


See the following simple unit test first. The rowKey is composed of three 
PInteger columns, and the slots of the SkipScanFilter are:
[[[1 - 4]], [5, 7], [[9 - 10]]]
When the SkipScanFilter.filterKeyValue method is invoked on a KeyValue whose 
rowKey is [2,7,11], SkipScanFilter.filterKeyValue correctly returns 
ReturnCode.SEEK_NEXT_USING_HINT, and SkipScanFilter.getNextCellHint should 
obviously return [3,5,9]; unfortunately, it actually returns [2,8,5,9], a very 
strange value, so the unit test fails.
{code} 
@Test
public void testNavigate() {
RowKeySchemaBuilder builder = new RowKeySchemaBuilder(3);
for(int i=0;i<3;i++) {
builder.addField(
new PDatum() {

@Override
public boolean isNullable() {
return false;
}

@Override
public PDataType getDataType() {
return PInteger.INSTANCE;
}

@Override
public Integer getMaxLength() {
return PInteger.INSTANCE.getMaxLength(null);
}

@Override
public Integer getScale() {
return PInteger.INSTANCE.getScale(null);
}

@Override
public SortOrder getSortOrder() {
return SortOrder.getDefault();
}

}, false, SortOrder.getDefault());
}

List<List<KeyRange>> rowKeyColumnRangesList = Arrays.asList(
Arrays.asList(
PInteger.INSTANCE.getKeyRange(PInteger.INSTANCE.toBytes(1), 
true, PInteger.INSTANCE.toBytes(4), true)),
Arrays.asList(
KeyRange.getKeyRange(PInteger.INSTANCE.toBytes(5)),
KeyRange.getKeyRange(PInteger.INSTANCE.toBytes(7))),
Arrays.asList(
PInteger.INSTANCE.getKeyRange(PInteger.INSTANCE.toBytes(9), 
true, PInteger.INSTANCE.toBytes(10), true))
);

SkipScanFilter skipScanFilter=new 
SkipScanFilter(rowKeyColumnRangesList, builder.build());

System.out.println(skipScanFilter);

byte[] rowKey=ByteUtil.concat(
PInteger.INSTANCE.toBytes(2), 
PInteger.INSTANCE.toBytes(7),
PInteger.INSTANCE.toBytes(11));
KeyValue keyValue=KeyValue.createFirstOnRow(rowKey);
ReturnCode returnCode=skipScanFilter.filterKeyValue(keyValue);
assertTrue(returnCode == ReturnCode.SEEK_NEXT_USING_HINT);
Cell nextCellHint=skipScanFilter.getNextCellHint(keyValue);

assertTrue(Bytes.toStringBinary(CellUtil.cloneRow(nextCellHint)).equals("\\x80\\x00\\x00\\x03\\x80\\x00\\x00\\x05\\x80\\x00\\x00\\x09"));
}
{code}

Let us see what is wrong. The first column of rowKey [2,7,11] is 2, which is in 
the SkipScanFilter's first slot range [1 - 4], so position[0] is 0 and we go to 
the second column, 7, which matches the second range [7] of the SkipScanFilter's 
second slot [5, 7], so position[1] is 1 and we go to the third column, 11, which 
is bigger than the third slot range [9 - 10], so position[2] is 0 and we begin 
to backtrack to the second column: because the second range [7] of the second 
slot is a singleKey and there is no further range, position[1] is reset to 0 
and we continue backtracking to the first column; because the first slot range 
[1 - 4] is not a singleKey, we stop backtracking at the first column.

Now the problem comes. At the following line 448 of the SkipScanFilter.navigate 
method, the SkipScanFilter.setStartKey method is invoked. It first copies the 
rowKey columns before ptr to SkipScanFilter.startKey; because ptr still points 
to the third column, the first and second columns are copied, and 
SkipScanFilter.startKey is [2,7] after this step. Then setStartKey copies the 
lower bounds of SkipScanFilter.slots from slot j+1 onward, according to the 
SkipScanFilter.position array; because j is now 0 and both position[1] and 
position[2] are 0, SkipScanFilter.startKey becomes [2,7,5,9]. At the following 
line 457, ByteUtil.nextKey is invoked on [2,7], incrementing it to [2,8], so 
SkipScanFilter.startKey finally becomes [2,8,5,9].

{code}
448    int currentLength = setStartKey(ptr, minOffset, j+1, nSlots, false);
449    // From here on, we use startKey as our buffer (resetting 

[jira] [Updated] (PHOENIX-3704) Table in PhoenixTable#tableMapping is getting replaced by original PTable after commit in Phoenix-Calcite

2017-03-01 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-3704:
-
Attachment: PHOENIX-3704.patch

Here is the patch I am going to commit.

> Table in PhoenixTable#tableMapping is getting replaced by original PTable 
> after commit in Phoenix-Calcite
> -
>
> Key: PHOENIX-3704
> URL: https://issues.apache.org/jira/browse/PHOENIX-3704
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>  Labels: calcite
> Attachments: PHOENIX-3704.patch
>
>
> The table in PhoenixTable#tableMapping is replaced by the original PTable in 
> MutationState#validate, which affects upserts to multi-tenant tables from a 
> non-tenant connection. 
> FYI [~maryannxue]. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (PHOENIX-3704) Table in PhoenixTable#tableMapping is getting replaced by original PTable after commit in Phoenix-Calcite

2017-03-01 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-3704:
-
Summary: Table in PhoenixTable#tableMapping is getting replaced by original 
PTable after commit in Phoenix-Calcite  (was: Table in 
PhoenixTable#tableMapping is replaced by original PTable after commit in 
Phoenix-Calcite)

> Table in PhoenixTable#tableMapping is getting replaced by original PTable 
> after commit in Phoenix-Calcite
> -
>
> Key: PHOENIX-3704
> URL: https://issues.apache.org/jira/browse/PHOENIX-3704
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>  Labels: calcite
>
> The table in PhoenixTable#tableMapping is replaced by the original PTable in 
> MutationState#validate, which affects upserts to multi-tenant tables from a 
> non-tenant connection. 
> FYI [~maryannxue]. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (PHOENIX-3704) Table in PhoenixTable#tableMapping is replaced by original PTable after commit in Phoenix-Calcite

2017-03-01 Thread Rajeshbabu Chintaguntla (JIRA)
Rajeshbabu Chintaguntla created PHOENIX-3704:


 Summary: Table in PhoenixTable#tableMapping is replaced by 
original PTable after commit in Phoenix-Calcite
 Key: PHOENIX-3704
 URL: https://issues.apache.org/jira/browse/PHOENIX-3704
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla


The table in PhoenixTable#tableMapping is replaced by the original PTable in 
MutationState#validate, which affects upserts to multi-tenant tables from a 
non-tenant connection. 
FYI [~maryannxue]. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (PHOENIX-3704) Table in PhoenixTable#tableMapping is getting replaced by original PTable after commit in Phoenix-Calcite

2017-03-01 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-3704:
-
Labels: calcite  (was: )

> Table in PhoenixTable#tableMapping is getting replaced by original PTable 
> after commit in Phoenix-Calcite
> -
>
> Key: PHOENIX-3704
> URL: https://issues.apache.org/jira/browse/PHOENIX-3704
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>  Labels: calcite
>
> The table in PhoenixTable#tableMapping is replaced by the original PTable in 
> MutationState#validate, which affects upserts to multi-tenant tables from a 
> non-tenant connection. 
> FYI [~maryannxue]. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (PHOENIX-3689) Not determinist order by with limit

2017-03-01 Thread Arthur (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arthur updated PHOENIX-3689:

Description: 
The following query does not return the last value of table TT:
select * from TT order by dt desc limit 1;
Adding a 'group by dt' clause brings back the correct result.

I noticed that an 'order by' with 'limit 10' returns a merge of 10 results 
from each region, not the top 10 results of the whole query.

So 'order by' is not deterministic. Is this a bug or a feature?

Here is my DDL:
{noformat}
CREATE TABLE TT (dt timestamp NOT NULL, message bigint NOT NULL, id varchar(20) 
NOT NULL, version varchar CONSTRAINT PK PRIMARY KEY (dt, message, id));
{noformat}

The issue occurs with a lot of data. I think the 'order by' clause is applied 
per region rather than over the whole result, so 'limit 1' returns a row from 
the first region that answers, and Phoenix caches it. With only one region, 
this does not occur.
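
Here is a toy illustration of that behavior (plain Java with made-up values, 
not Phoenix internals), contrasting taking the first region's answer with 
merging all per-region results before applying the limit:
{code}
import java.util.List;

public class RegionMergeDemo {
    public static void main(String[] args) {
        // Each region returns its own top-1 dt for "order by dt desc limit 1".
        List<List<String>> perRegionTop1 = List.of(
                List.of("2017-01-01 07:54:40.730"),   // region 1
                List.of("2017-01-02 06:46:39.990"));  // region 2

        // Wrong: take the answer of whichever region responds first.
        System.out.println(perRegionTop1.get(0).get(0));

        // Right: merge every region's results, then apply the limit.
        System.out.println(perRegionTop1.stream()
                .flatMap(List::stream)
                .max(String::compareTo)   // ISO timestamps sort lexicographically
                .orElseThrow());
    }
}
{code}
The "wrong" branch reproduces the 2017-01-01 07:54:40.730 answer shown below.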

This script generates enough data to trigger the issue:
{code}
#!/usr/bin/python

from datetime import datetime, timedelta

# 10M rows, 10ms apart, starting at 2017-01-01 03:00:00;
# the last row is therefore at 2017-01-02 06:46:39.99.
dt = datetime(2017, 1, 1, 3)
with open('data.csv', 'w') as file:
    for i in range(0, 10000000):
        newdt = dt + timedelta(microseconds=i*10000)
        file.write("{};{};{};\n".format(
            datetime.strftime(newdt, "%Y-%m-%d %H:%M:%S.%f"),
            91 if i % 10 == 0 else 100,
            str(i).zfill(20)))
{code}

With this data set, the last row is: 2017-01-02 06:46:39.99

The result with the order by clause is not the last value:
{noformat}
select dt from TT order by dt desc limit 1;
+--------------------------+
|            DT            |
+--------------------------+
| 2017-01-01 07:54:40.730  |
+--------------------------+
{noformat}

The correct result is returned when using group by, but I need all columns.
{noformat}
select dt from TT group by dt order by dt desc limit 1;
+--------------------------+
|            DT            |
+--------------------------+
| 2017-01-02 06:46:39.990  |
+--------------------------+
{noformat}

I use a subquery as a workaround, but performance is not good.
{noformat}
select * from TT where dt = ANY(select dt from TT group by dt order by dt desc 
limit 1);
+--------------------------+----------+-----------------------+----------+
|            DT            | MESSAGE  |          ID           | VERSION  |
+--------------------------+----------+-----------------------+----------+
| 2017-01-02 06:46:39.990  | 100      | 00000000000009999999  |          |
+--------------------------+----------+-----------------------+----------+
1 row selected (8.393 seconds)
{noformat}



> Not determinist order by with limit
> ---
>
> Key: PHOENIX-3689
> URL: https://issues.apache.org/jira/browse/PHOENIX-3689
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Arthur
>
> The following query does not return the last value of table TT:
> select * from TT order by dt desc limit 1;
> Adding a 'group by dt' clause brings back the correct result.
> I noticed that an order by with 

[jira] [Commented] (PHOENIX-3583) Prepare IndexMaintainer on server itself

2017-03-01 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15889845#comment-15889845
 ] 

Ankit Singhal commented on PHOENIX-3583:


bq. We have logic on the client-side in MutationState that detects this 
condition. We also have logic in our index building code for this (and tests as 
well). If there's a known issue, we should fix it.

Can you please point me to the code where we check for a new index before 
building index mutations on the server?

> Prepare IndexMaintainer on server itself
> 
>
> Key: PHOENIX-3583
> URL: https://issues.apache.org/jira/browse/PHOENIX-3583
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Attachments: PHOENIX-3583.patch
>
>
> -- Reuse the cache of PTable and its lifecycle.
> -- With the new implementation, we will be doing an RPC to the meta table per 
> mini batch, which could be an overhead, but the same configuration 
> "updateCacheFrequency" can be used to control the frequency of touching the 
> SYSTEM.CATALOG endpoint for an updated PTable or index maintainers. 
> -- It is expected that 99% of the time the table is unchanged and the RPC will 
> return an empty result (so it should be less costly), as opposed to the 
> current implementation, where we have to send the index maintainer payload to 
> each region server per upsert batch.
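
A rough sketch of the frequency-gated lookup described above; the names and 
types are hypothetical, not the actual Phoenix implementation:
{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch only: refresh cached metadata at most once per updateCacheFrequency
// interval; `resolver` stands in for the SYSTEM.CATALOG RPC, which is cheap
// when nothing has changed.
final class FrequencyGatedCache<K, V> {
    private static final class Entry<T> {
        final T value;
        final long resolvedAtMs;
        Entry(T value, long resolvedAtMs) {
            this.value = value;
            this.resolvedAtMs = resolvedAtMs;
        }
    }

    private final ConcurrentHashMap<K, Entry<V>> cache = new ConcurrentHashMap<>();
    private final long updateCacheFrequencyMs;
    private final Function<K, V> resolver;

    FrequencyGatedCache(long updateCacheFrequencyMs, Function<K, V> resolver) {
        this.updateCacheFrequencyMs = updateCacheFrequencyMs;
        this.resolver = resolver;
    }

    V get(K key) {
        long now = System.currentTimeMillis();
        Entry<V> e = cache.get(key);
        if (e == null || now - e.resolvedAtMs >= updateCacheFrequencyMs) {
            e = new Entry<>(resolver.apply(key), now);  // at most one RPC per interval
            cache.put(key, e);
        }
        return e.value;
    }
}
{code}
With a high updateCacheFrequency, the common case (an unchanged table) costs 
nothing between refreshes, which matches the 99% empty-result expectation 
above.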



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3649) After PHOENIX-3271 higher memory consumption on RS leading to OOM/abort on immutable index creation with multiple regions on single RS

2017-03-01 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15889837#comment-15889837
 ] 

Ankit Singhal commented on PHOENIX-3649:


bq. Does PHOENIX-3271 set the time stamp of the distributed upsert to the time 
stamp of when the query was started/compiled? We'd want to pass the time stamp 
over from the client so that we're consistent across all region servers. If the 
time stamp is set correctly, then 
ImmutableIndexIT#testCreateIndexDuringUpsertSelect should be ok.
No, we don't pass the compilation timestamp. I thought it was needed only to 
cap the query so that it doesn't read newly written data, but with read 
isolation we should not need that, right? Or do you want updates to go in at 
the client timestamp even when SCN is not set? Note that we can't run UPSERT 
SELECT on the server for immutable tables that have indexes, because index 
maintenance for immutable tables is still handled on the client.

bq. Otherwise, if it's not working for immutable tables, I'd expect it's not 
working for mutable tables either
Yes, there will be the same problem if a mutable index is created during an 
UPSERT SELECT on the table.

But we already have this problem today: when a batch is sent to the server 
with index maintainers (in the cache or attached to the mutations), an index 
created during that time will not receive the updates on the fly. See 
PHOENIX-3583.
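
A toy model of that race (made-up names, not Phoenix code): the batch carries 
the index list captured at send time, so an index created while the batch is 
in flight receives no updates from it:
{code}
import java.util.ArrayList;
import java.util.List;

public class StaleMaintainerRace {
    public static void main(String[] args) {
        // The client captures the index maintainers when the batch is sent.
        List<String> shippedMaintainers = new ArrayList<>(List.of("IDX_A"));

        // Meanwhile, a new index is created on the server side.
        List<String> indexesOnServer = new ArrayList<>(shippedMaintainers);
        indexesOnServer.add("IDX_B");

        // The server applies the batch using the shipped (stale) list,
        // so IDX_B silently misses these rows.
        for (String idx : shippedMaintainers) {
            System.out.println("updating " + idx);
        }
    }
}
{code}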

> After PHOENIX-3271 higher memory consumption on RS leading to OOM/abort on 
> immutable index creation with multiple regions on single RS
> --
>
> Key: PHOENIX-3649
> URL: https://issues.apache.org/jira/browse/PHOENIX-3649
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.9.0
>Reporter: Mujtaba Chohan
>Assignee: Ankit Singhal
>Priority: Blocker
> Fix For: 4.9.1, 4.10.0
>
> Attachments: PHOENIX-3649.patch, PHOENIX-3649_v1.patch
>
>
> *Configuration*
> hbase-0.98.23 standalone
> Heap 5GB
> *When*
> Verified that this happens after PHOENIX-3271 Distribute UPSERT SELECT across 
> cluster. 
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commitdiff;h=accd4a276d1085e5d1069caf93798d8f301e4ed6
> To repro
> {noformat}
> CREATE TABLE INDEXED_TABLE (HOST CHAR(2) NOT NULL,DOMAIN VARCHAR NOT NULL, 
> FEATURE VARCHAR NOT NULL,DATE DATE NOT NULL,USAGE.CORE BIGINT,USAGE.DB 
> BIGINT,STATS.ACTIVE_VISITOR INTEGER CONSTRAINT PK PRIMARY KEY (HOST, DOMAIN, 
> FEATURE, DATE)) IMMUTABLE_ROWS=true,MAX_FILESIZE=30485760
> {noformat}
> Upsert 2M rows (CSV available at https://goo.gl/OsTSKB), which will create 
> ~4 regions on a single RS, and then create the index with the data present:
> {noformat}
> CREATE INDEX idx5 ON INDEXED_TABLE (CORE) INCLUDE (DB,ACTIVE_VISITOR)
> {noformat}
> From RS log
> {noformat}
> 2017-02-02 13:29:06,899 WARN  [rs,51371,1486070044538-HeapMemoryChore] 
> regionserver.HeapMemoryManager: heapOccupancyPercent 0.97875696 is above heap 
> occupancy alarm watermark (0.95)
> 2017-02-02 13:29:18,198 INFO  [SessionTracker] server.ZooKeeperServer: 
> Expiring session 0x15a00ad4f31, timeout of 10000ms exceeded
> 2017-02-02 13:29:18,231 WARN  [JvmPauseMonitor] util.JvmPauseMonitor: 
> Detected pause in JVM or host machine (eg GC): pause of approximately 10581ms
> GC pool 'ParNew' had collection(s): count=4 time=139ms
> 2017-02-02 13:29:19,669 FATAL [RS:0;rs:51371-EventThread] 
> regionserver.HRegionServer: ABORTING region server rs,51371,1486070044538: 
> regionserver:51371-0x15a00ad4f31, quorum=localhost:2181, baseZNode=/hbase 
> regionserver:51371-0x15a00ad4f31 received expired from ZooKeeper, aborting
> {noformat}
> Prior to the change, index creation succeeds with as little as a 2GB heap.
> [~an...@apache.org]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)