Re: Dropping of Index can still leave some non-replayed writes Phoenix-2915

2016-06-15 Thread Ankit Singhal
Hi Anupama,

Option 1:-
You can create an ASYNC index so that the WAL can be replayed. Once your
regions are up, remember to flush the data table before dropping the
index.
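
For concreteness, Option 1 might look like the following (the table, column,
and index names are placeholders, not taken from this thread):

```sql
-- Recreate the dropped index as ASYNC so that opening the data table's
-- regions does not attempt synchronous index writes during WAL replay.
-- All names below are hypothetical.
CREATE INDEX MY_INDEX ON MY_DATA_TABLE (KV1) ASYNC;
```

Once the data table's regions are open, flush the data table (for example,
`flush 'MY_DATA_TABLE'` in the HBase shell) before dropping the index again.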

Option 2:-
Create a table in HBase with the same name as the index table by using the
HBase shell.

Regards,
Ankit Singhal


On Tue, Jun 14, 2016 at 11:19 PM, anupama agarwal  wrote:

> Hi All,
>
> I have hit this error in phoenix, Phoenix-2915. It could be possible ,
> that there are some index writes in WAL which are not replayed and the
> index is dropped.
>
> And, now the table is not there, these writes cannot be replayed which
> result in data table regions also to not come up. My data region is in
> FAILED_TO_OPEN state. I have tried recreating the index, and still region
> is not able to come up. I realise that this has been fixed in new version
> of phoenix, but I am currently on phoenix 4.6 and Hbase 1.0. Can you please
> suggest a solution?
>


Re: Dropping of Index can still leave some non-replayed writes Phoenix-2915

2016-06-15 Thread Ankit Singhal
Yes, restart your cluster

On Wed, Jun 15, 2016 at 8:17 AM, anupama agarwal  wrote:

> I have created async index with same name. But I am still getting the same
> error. Should I restart my cluster for changes to reflect?
> On Jun 15, 2016 8:38 PM, "Ankit Singhal"  wrote:
>
>> Hi Anupama,
>>
>> Option 1:-
>> You can create a ASYNC index so that WAL can be replayed. And once your
>> regions are up , remember to do the flush of data table before dropping the
>> index.
>>
>> Option 2:-
>> Create a table in hbase with the same name as index table name by using
>> hbase shell.
>>
>> Regards,
>> Ankit Singhal
>>
>>
>> On Tue, Jun 14, 2016 at 11:19 PM, anupama agarwal 
>> wrote:
>>
>>> Hi All,
>>>
>>> I have hit this error in phoenix, Phoenix-2915. It could be possible ,
>>> that there are some index writes in WAL which are not replayed and the
>>> index is dropped.
>>>
>>> And, now the table is not there, these writes cannot be replayed which
>>> result in data table regions also to not come up. My data region is in
>>> FAILED_TO_OPEN state. I have tried recreating the index, and still region
>>> is not able to come up. I realise that this has been fixed in new version
>>> of phoenix, but I am currently on phoenix 4.6 and Hbase 1.0. Can you please
>>> suggest a solution?
>>>
>>
>>


[jira] [Resolved] (PHOENIX-2992) Remove ORDER BY from aggregate-only SELECT statements

2016-06-15 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved PHOENIX-2992.

Resolution: Fixed

Committed to master and 4.x*

Thanks for looking, [~giacomotaylor].
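
For context, the rewrite targets aggregate-only statements like the following
(hypothetical schema): because the query produces exactly one row, the ORDER BY
cannot affect the result and can be dropped by the optimizer.

```sql
-- Aggregate-only SELECT: yields a single row, so the ORDER BY below is
-- a no-op and can safely be removed (table and column are hypothetical).
SELECT COUNT(DISTINCT HOST) FROM METRICS ORDER BY COUNT(DISTINCT HOST);
```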

> Remove ORDER BY from aggregate-only SELECT statements
> -
>
> Key: PHOENIX-2992
> URL: https://issues.apache.org/jira/browse/PHOENIX-2992
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 4.8.0
>
> Attachments: 2989-orderby-v3.txt, 2992-v1.txt
>
>
> In PHOENIX-2989 we observe that any ORDER BY clause can simply be removed 
> from any statement that only SELECTs on COUNT(DISTINCT ...)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2208) Navigation to trace information in tracing UI should be driven off of query instead of trace ID

2016-06-15 Thread Nishani (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nishani  updated PHOENIX-2208:
--
Attachment: Query-builder.png

A mock UI for the Query Builder is attached. Options to turn tracing on,
execute the query, and turn tracing off are provided. For the result set, a
timeline, list, dependency tree, and distribution can be shown.

> Navigation to trace information in tracing UI should be driven off of query 
> instead of trace ID
> ---
>
> Key: PHOENIX-2208
> URL: https://issues.apache.org/jira/browse/PHOENIX-2208
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Nishani 
> Attachments: Query-builder.png
>
>
> Instead of driving the trace UI based on the trace ID, we should drive it off 
> of the query string. Something like a drop down list that shows the query 
> string of the last N queries which can be selected from, with a search box 
> for a regex query string and perhaps time range that would search for the 
> trace ID under the covers. 
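
The proposal above could be sketched roughly as follows (a toy illustration
with invented class and method names, not the tracing webapp's actual code):
remember the query strings and trace IDs of the last N queries, and look
trace IDs up by a regex over the query string.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.regex.Pattern;

// Toy model of the proposed navigation: keep the last N queries and
// their trace IDs, and find trace IDs by regex instead of by trace ID.
class RecentTraces {
    private final LinkedHashMap<String, Long> recent;

    RecentTraces(final int capacity) {
        // Insertion-ordered map that evicts the oldest query once more
        // than `capacity` entries have been recorded.
        this.recent = new LinkedHashMap<String, Long>() {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Long> eldest) {
                return size() > capacity;
            }
        };
    }

    void record(String query, long traceId) {
        recent.put(query, traceId);
    }

    // Trace IDs of all remembered queries whose text matches the regex.
    List<Long> search(String regex) {
        Pattern p = Pattern.compile(regex);
        List<Long> ids = new ArrayList<>();
        for (Map.Entry<String, Long> e : recent.entrySet()) {
            if (p.matcher(e.getKey()).find()) {
                ids.add(e.getValue());
            }
        }
        return ids;
    }

    public static void main(String[] args) {
        RecentTraces traces = new RecentTraces(100);
        traces.record("SELECT COUNT(*) FROM WEB_STAT", 42L);
        System.out.println(traces.search("WEB_STAT")); // [42]
    }
}
```

A real implementation would also need the time-range filter mentioned above,
but the lookup shape stays the same.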





Re: Dropping of Index can still leave some non-replayed writes Phoenix-2915

2016-06-15 Thread anupama agarwal
This didn't work, Ankit. Please find the detailed region server logs below
and let me know what else I can try. When I run the hbck command, this is
the error I get:

ERROR: There is a hole in the region chain between \x06 and \x07.  You need
to create a new .regioninfo and region dir in hdfs to plug the hole.


2016-06-15 21:53:41,132 ERROR org.apache.hadoop.hbase.client.AsyncProcess:
Cannot get replica 0 location for
{"totalColumns":25,"families":{"0":[{"timestamp":1465911182647,"tag":[],"qualifier":"_0","vlen":0},{"timestamp":1465911182647,"tag":[],"qualifier":"0:ACCRUAL_GROUP","vlen":8},{"timestamp":1465911182647,"tag":[],"qualifier":"0:EXTERNAL_REF_ID","vlen":10},{"timestamp":1465911182647,"tag":[],"qualifier":"0:PARTY_ID_FROM","vlen":4}]},"row":"\\x11PlatformServiceItem-mp-S177340305-OD306272384176181000-3627238417618100-FORWARD-S177340305\\x00RevenueAccrual\\x00fkmp\\x00fkmpra2016061401292e1933e3cec42"}

2016-06-15 21:53:41,140 ERROR org.apache.hadoop.hbase.client.AsyncProcess:
Cannot get replica 0 location for
{"ts":1465911182647,"totalColumns":12,"families":{"0":[{"timestamp":1465911182647,"tag":[],"qualifier":"0:ACCRUAL_REF_6","vlen":0},{"timestamp":1465911182647,"tag":[],"qualifier":"0:ACCRUAL_REF_10","vlen":0},{"timestamp":1465911182647,"tag":[],"qualifier":"0:ACCRUAL_REF_9","vlen":0},{"timestamp":1465911182647,"tag":[],"qualifier":"0:ACCRUAL_REF_8","vlen":0}]},"row":"\\x11PlatformServiceItem-mp-S177340305-OD306272384176181000-3627238417618100-FORWARD-S177340305\\x00RevenueAccrual\\x00fkmp\\x00fkmpra2016061401292e1933e3cec42"}

2016-06-15 21:53:41,216 ERROR
org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Failed open
of
region=APL_ACCRUAL_SERVICE.ACCRUALS,\x06,1462851156716.eaee593f66a1ef282a2b5ae352624982.,
starting to roll back the global memstore size.

org.apache.phoenix.hbase.index.exception.MultiIndexWriteFailureException:
Failed to write to multiple index tables

at
org.apache.phoenix.hbase.index.write.recovery.TrackingParallelWriterIndexCommitter.write(TrackingParallelWriterIndexCommitter.java:220)

at
org.apache.phoenix.hbase.index.write.IndexWriter.write(IndexWriter.java:179)

at
org.apache.phoenix.hbase.index.write.IndexWriter.write(IndexWriter.java:169)

at org.apache.phoenix.hbase.index.Indexer.preWALRestore(Indexer.java:545)

at
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$58.call(RegionCoprocessorHost.java:1422)

at
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1663)

at
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1738)

at
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1695)

at
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preWALRestore(RegionCoprocessorHost.java:1413)

at
org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEdits(HRegion.java:3940)

at
org.apache.hadoop.hbase.regionserver.HRegion.replayRecoveredEditsIfAny(HRegion.java:3797)

at
org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionStores(HRegion.java:969)

at
org.apache.hadoop.hbase.regionserver.HRegion.initializeRegionInternals(HRegion.java:841)

at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:814)

at
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5828)

at
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5794)

at
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5765)

at
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5721)

at
org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:5672)

at
org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:356)

at
org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:126)

at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)

at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)

at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)

at java.lang.Thread.run(Thread.java:745)

2016-06-15 21:53:41,220 INFO
org.apache.hadoop.hbase.coordination.ZkOpenRegionCoordination: Opening of
region {ENCODED => eaee593f66a1ef282a2b5ae352624982, NAME =>
'APL_ACCRUAL_SERVICE.ACCRUALS,\x06,1462851156716.eaee593f66a1ef282a2b5ae352624982.',
STARTKEY => '\x06', ENDKEY => '\x07'} failed, transitioning from OPENING to
FAILED_OPEN in ZK, expecting version 22

2016-06-15 21:57:50,092 INFO
org.apache.hadoop.hbase.io.hfile.LruBlockCache: totalSize=8.37 MB,
freeSize=7.97 GB, max=7.98 GB, blockCount=0, accesses=0, hits=0,
hitRatio=0, cachingAccesses=0, cachingHits=0,
cachingHitsRatio=0,evictions=29, evicted=0, evictedPerRun=0.0



On Wed, Jun 15, 2016 at 8:48 PM, Ankit Singhal 
wrote:

> Yes, restart your cluster
>
> On Wed, Jun 15, 2016 at 8:17 AM, anupama agarwal 
> wrote:
>
>> I have

Re: Dropping of Index can still leave some non-replayed writes Phoenix-2915

2016-06-15 Thread anupama agarwal
I have created an async index with the same name, but I am still getting the
same error. Should I restart my cluster for the changes to take effect?
On Jun 15, 2016 8:38 PM, "Ankit Singhal"  wrote:

> Hi Anupama,
>
> Option 1:-
> You can create a ASYNC index so that WAL can be replayed. And once your
> regions are up , remember to do the flush of data table before dropping the
> index.
>
> Option 2:-
> Create a table in hbase with the same name as index table name by using
> hbase shell.
>
> Regards,
> Ankit Singhal
>
>
> On Tue, Jun 14, 2016 at 11:19 PM, anupama agarwal 
> wrote:
>
>> Hi All,
>>
>> I have hit this error in phoenix, Phoenix-2915. It could be possible ,
>> that there are some index writes in WAL which are not replayed and the
>> index is dropped.
>>
>> And, now the table is not there, these writes cannot be replayed which
>> result in data table regions also to not come up. My data region is in
>> FAILED_TO_OPEN state. I have tried recreating the index, and still region
>> is not able to come up. I realise that this has been fixed in new version
>> of phoenix, but I am currently on phoenix 4.6 and Hbase 1.0. Can you please
>> suggest a solution?
>>
>
>


Dropping of Index can still leave some non-replayed writes Phoenix-2915

2016-06-15 Thread anupama agarwal
Hi All,

I have hit this error in Phoenix (PHOENIX-2915). It is possible that
there are some index writes in the WAL which were not replayed before the
index was dropped.

Now that the index table is gone, these writes cannot be replayed, which
prevents the data table regions from coming up. My data region is in
FAILED_TO_OPEN state. I have tried recreating the index, and the region
still fails to come up. I realise this has been fixed in a newer version
of Phoenix, but I am currently on Phoenix 4.6 and HBase 1.0. Can you please
suggest a solution?


[jira] [Commented] (PHOENIX-2979) ScannerLeaseRenewalIT tests are failing

2016-06-15 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15332363#comment-15332363
 ] 

Sean Busbey commented on PHOENIX-2979:
--

I am getting this same failure locally, but it doesn't appear to be present on 
the linked ASF build job.

> ScannerLeaseRenewalIT tests are failing
> ---
>
> Key: PHOENIX-2979
> URL: https://issues.apache.org/jira/browse/PHOENIX-2979
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Samarth Jain
> Fix For: 4.8.0
>
>
> Tests in error:
>  ScannerLeaseRenewalIT.testRenewLeasePreventsSelectQueryFromFailing:135 » 
> PhoenixIO
>  ScannerLeaseRenewalIT.testRenewLeasePreventsUpsertSelectFromFailing:175 » 
> PhoenixIO
> For more detail, see recent test run: 
> https://builds.apache.org/job/Phoenix-master/1247/ and feel free to reassign 
> as necessary.





[jira] [Commented] (PHOENIX-2979) ScannerLeaseRenewalIT tests are failing

2016-06-15 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15332368#comment-15332368
 ] 

Samarth Jain commented on PHOENIX-2979:
---

[~busbey] - what phoenix and hbase versions are you using?

> ScannerLeaseRenewalIT tests are failing
> ---
>
> Key: PHOENIX-2979
> URL: https://issues.apache.org/jira/browse/PHOENIX-2979
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Samarth Jain
> Fix For: 4.8.0
>
>
> Tests in error:
>  ScannerLeaseRenewalIT.testRenewLeasePreventsSelectQueryFromFailing:135 » 
> PhoenixIO
>  ScannerLeaseRenewalIT.testRenewLeasePreventsUpsertSelectFromFailing:175 » 
> PhoenixIO
> For more detail, see recent test run: 
> https://builds.apache.org/job/Phoenix-master/1247/ and feel free to reassign 
> as necessary.





[jira] [Updated] (PHOENIX-2276) Creating index on a global view on a multi-tenant table fails with NPE

2016-06-15 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-2276:
--
Attachment: PHOENIX-2276.patch

Patch that fixes the NPE by changing the row key of shared indexes. I have 
manually tested that upgrading to 4.8 disables existing view and local indexes. 
The upgrade also successfully truncates the underlying HBase table. I also 
tested that rebuilding the indexes repopulates the index table with the new 
row key.

I am, however, getting a test failure in HashJoinMoreIT#testJoinWithMultitenancy. 
The failing query does a right join; the same query with an inner join works. 
Stack trace:

{code}
org.apache.phoenix.exception.PhoenixIOException: 
org.apache.phoenix.exception.PhoenixIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
joinId: dK�K_�. The cache might have expired and have been removed.
at 
org.apache.phoenix.coprocessor.HashJoinRegionScanner.(HashJoinRegionScanner.java:98)
at 
org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:229)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:212)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1340)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1656)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1733)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1695)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1335)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3250)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31190)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)

at 
org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:111)
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:775)
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:721)
at 
org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
at 
org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
at 
org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
at 
org.apache.phoenix.iterate.DelegateResultIterator.next(DelegateResultIterator.java:44)
at 
org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:778)
at 
org.apache.phoenix.end2end.HashJoinMoreIT.testJoinWithMultiTenancy(HashJoinMoreIT.java:575)
{code}

[~maryannxue] - any pointers as to why my change could be causing this failure? 
My patch basically changes the positions of the index_id and tenant_id columns 
in the row key.




> Creating index on a global view on a multi-tenant table fails with NPE
> --
>
> Key: PHOENIX-2276
> URL: https://issues.apache.org/jira/browse/PHOENIX-2276
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
>  Labels: SFDC
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2276.patch
>
>
> {code}
> @Test
> public void testCreatingIndexOnGlobalView() throws Exception {
> String baseTable = "testRowTimestampColWithViews".toUpperCase();
> String globalView = "globalView".toUpperCase();
> String globalViewIdx = "globalView_idx".toUpperCase();
> long ts = nextTimestamp();
> try (Connection conn = getConnection(ts)) {
> conn.createStatement().execute("CREATE TABLE " + baseTable + " 
> (TENANT_ID CHAR(15) NOT NULL, PK2 DATE NOT NULL, PK3 INTEGER NOT NULL, KV1 
> VARCHAR, KV2 VARCHAR, KV3 CHAR(15) CONSTRAINT PK PRIMARY KEY(TENANT_ID, PK2 
> ROW_TIMESTAMP, PK3)) MULTI_TENANT=true");
> }
> ts = nextTimestamp();
> try (Connection conn = getConnection(ts)) {
> conn.createStatement().execute("CREATE VIEW " + globalView + " AS 
> SELECT * FROM " + baseTable + " WHERE KV1 = 'KV1'");
> }
> 

[jira] [Created] (PHOENIX-2999) Upgrading Multi-tenant table to map with namespace using upgradeUtil

2016-06-15 Thread Ankit Singhal (JIRA)
Ankit Singhal created PHOENIX-2999:
--

 Summary: Upgrading Multi-tenant table to map with namespace using 
upgradeUtil
 Key: PHOENIX-2999
 URL: https://issues.apache.org/jira/browse/PHOENIX-2999
 Project: Phoenix
  Issue Type: Bug
Reporter: Ankit Singhal
Assignee: Ankit Singhal


Currently, upgradeUtil doesn't handle multi-tenant tables with tenant views 
properly.








[jira] [Commented] (PHOENIX-2999) Upgrading Multi-tenant table to map with namespace using upgradeUtil

2016-06-15 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15332565#comment-15332565
 ] 

Ankit Singhal commented on PHOENIX-2999:


[~jamestaylor], for the 4.8 release, should we prevent users from using 
upgradeUtil to upgrade multi-tenant tables?

> Upgrading Multi-tenant table to map with namespace using upgradeUtil
> 
>
> Key: PHOENIX-2999
> URL: https://issues.apache.org/jira/browse/PHOENIX-2999
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>
> currently upgradeUtil doesn't handle multi-tenant table with tenant views 
> properly.





[jira] [Commented] (PHOENIX-2535) Create shaded clients (thin + thick)

2016-06-15 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15332571#comment-15332571
 ] 

Sergey Soldatov commented on PHOENIX-2535:
--

[~jamestaylor], [~elserj], [~enis], [~mujtabachohan] thank you for the 
review/comments. Committed to the master and 4.x branches.

> Create shaded clients (thin + thick) 
> -
>
> Key: PHOENIX-2535
> URL: https://issues.apache.org/jira/browse/PHOENIX-2535
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Sergey Soldatov
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2535-1.patch, PHOENIX-2535-2.patch, 
> PHOENIX-2535-3.patch, PHOENIX-2535-4.patch, PHOENIX-2535-5.patch, 
> PHOENIX-2535-6.patch, PHOENIX-2535-7.patch
>
>
> Having shaded client artifacts helps greatly in minimizing the dependency 
> conflicts at the run time. We are seeing more of Phoenix JDBC client being 
> used in Storm topologies and other settings where guava versions become a 
> problem. 
> I think we can do a parallel artifact for the thick client with shaded 
> dependencies and also using shaded hbase. For thin client, maybe shading 
> should be the default since it is new? 





[jira] [Commented] (PHOENIX-2535) Create shaded clients (thin + thick)

2016-06-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15332638#comment-15332638
 ] 

Hudson commented on PHOENIX-2535:
-

FAILURE: Integrated in Phoenix-master #1261 (See 
[https://builds.apache.org/job/Phoenix-master/1261/])
PHOENIX-2535 Create shaded clients (thin + thick) (ssa: rev 
4f6ee74c0a7b94282575300cfd698e78198685fb)
* phoenix-queryserver/pom.xml
* 
phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/PhoenixMetaFactoryImpl.java
* phoenix-assembly/pom.xml
* phoenix-assembly/src/build/client-minimal.xml
* phoenix-server/src/test/java/org/apache/phoenix/DriverCohabitationTest.java
* 
phoenix-queryserver-client/src/main/resources/META-INF/services/java.sql.Driver
* 
phoenix-server-client/src/main/resources/version/org-apache-phoenix-remote-jdbc.properties
* 
phoenix-queryserver/src/it/java/org/apache/phoenix/end2end/QueryServerThread.java
* 
phoenix-queryserver/src/it/java/org/apache/phoenix/end2end/QueryServerBasicsIT.java
* phoenix-assembly/src/build/components-major-client.xml
* phoenix-server-client/src/build/thin-client.xml
* phoenix-server/src/it/java/org/apache/phoenix/end2end/QueryServerBasicsIT.java
* phoenix-server/src/main/java/org/apache/phoenix/queryserver/server/Main.java
* phoenix-assembly/src/build/components-minimal.xml
* phoenix-assembly/src/build/client-without-hbase.xml
* phoenix-core/src/main/java/org/apache/phoenix/query/QueryServicesOptions.java
* 
phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/Main.java
* 
phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/PhoenixMetaFactory.java
* 
phoenix-queryserver-client/src/main/java/org/apache/phoenix/queryserver/client/Driver.java
* phoenix-server/pom.xml
* phoenix-server-client/pom.xml
* 
phoenix-server/src/main/java/org/apache/phoenix/queryserver/server/PhoenixMetaFactory.java
* phoenix-server/src/build/query-server-runnable.xml
* phoenix-assembly/src/build/components/all-common-jars.xml
* bin/log4j.properties
* 
phoenix-queryserver-client/src/main/java/org/apache/phoenix/queryserver/client/ThinClientUtil.java
* phoenix-assembly/src/build/server.xml
* phoenix-server/src/it/java/org/apache/phoenix/end2end/QueryServerThread.java
* phoenix-server/src/it/resources/log4j.properties
* bin/phoenix_utils.py
* phoenix-hive/pom.xml
* 
phoenix-queryserver/src/test/java/org/apache/phoenix/DriverCohabitationTest.java
* phoenix-assembly/src/build/client.xml
* pom.xml
* 
phoenix-server-client/src/main/java/org/apache/phoenix/queryserver/client/ThinClientUtil.java
* bin/tephra
* phoenix-server-client/src/main/resources/META-INF/services/java.sql.Driver
* phoenix-queryserver-client/pom.xml
* phoenix-client/pom.xml
* phoenix-queryserver/src/build/query-server-runnable.xml
* 
phoenix-server-client/src/main/java/org/apache/phoenix/queryserver/client/Driver.java
* bin/queryserver.py
* phoenix-assembly/src/build/client-spark.xml
* 
phoenix-server/src/main/java/org/apache/phoenix/queryserver/server/PhoenixMetaFactoryImpl.java
* phoenix-queryserver/src/it/resources/log4j.properties
* 
phoenix-queryserver-client/src/main/resources/version/org-apache-phoenix-remote-jdbc.properties


> Create shaded clients (thin + thick) 
> -
>
> Key: PHOENIX-2535
> URL: https://issues.apache.org/jira/browse/PHOENIX-2535
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Sergey Soldatov
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2535-1.patch, PHOENIX-2535-2.patch, 
> PHOENIX-2535-3.patch, PHOENIX-2535-4.patch, PHOENIX-2535-5.patch, 
> PHOENIX-2535-6.patch, PHOENIX-2535-7.patch
>
>
> Having shaded client artifacts helps greatly in minimizing the dependency 
> conflicts at the run time. We are seeing more of Phoenix JDBC client being 
> used in Storm topologies and other settings where guava versions become a 
> problem. 
> I think we can do a parallel artifact for the thick client with shaded 
> dependencies and also using shaded hbase. For thin client, maybe shading 
> should be the default since it is new? 





[jira] [Commented] (PHOENIX-2535) Create shaded clients (thin + thick)

2016-06-15 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15332645#comment-15332645
 ] 

Josh Elser commented on PHOENIX-2535:
-

Great work, [~sergey.soldatov]. This is a great step in the right direction!

> Create shaded clients (thin + thick) 
> -
>
> Key: PHOENIX-2535
> URL: https://issues.apache.org/jira/browse/PHOENIX-2535
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Sergey Soldatov
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2535-1.patch, PHOENIX-2535-2.patch, 
> PHOENIX-2535-3.patch, PHOENIX-2535-4.patch, PHOENIX-2535-5.patch, 
> PHOENIX-2535-6.patch, PHOENIX-2535-7.patch
>
>
> Having shaded client artifacts helps greatly in minimizing the dependency 
> conflicts at the run time. We are seeing more of Phoenix JDBC client being 
> used in Storm topologies and other settings where guava versions become a 
> problem. 
> I think we can do a parallel artifact for the thick client with shaded 
> dependencies and also using shaded hbase. For thin client, maybe shading 
> should be the default since it is new? 





[jira] [Commented] (PHOENIX-2535) Create shaded clients (thin + thick)

2016-06-15 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15332785#comment-15332785
 ] 

Enis Soztutar commented on PHOENIX-2535:


Great work!

bq. Enis Soztutar Interesting question. Do we want to publish fat shaded jars 
for client or I can just skip the install phase for client and server. I don't 
remember for sure, but we were discussing that it would be good to publish the 
artifact for full client?
I think it will help. Some libraries publish fat jars into Maven, which is 
very convenient for users. I think we should definitely do that. Let's open 
a follow-up issue.

> Create shaded clients (thin + thick) 
> -
>
> Key: PHOENIX-2535
> URL: https://issues.apache.org/jira/browse/PHOENIX-2535
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Sergey Soldatov
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2535-1.patch, PHOENIX-2535-2.patch, 
> PHOENIX-2535-3.patch, PHOENIX-2535-4.patch, PHOENIX-2535-5.patch, 
> PHOENIX-2535-6.patch, PHOENIX-2535-7.patch
>
>
> Having shaded client artifacts helps greatly in minimizing the dependency 
> conflicts at the run time. We are seeing more of Phoenix JDBC client being 
> used in Storm topologies and other settings where guava versions become a 
> problem. 
> I think we can do a parallel artifact for the thick client with shaded 
> dependencies and also using shaded hbase. For thin client, maybe shading 
> should be the default since it is new? 





[jira] [Commented] (PHOENIX-2276) Creating index on a global view on a multi-tenant table fails with NPE

2016-06-15 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15332791#comment-15332791
 ] 

James Taylor commented on PHOENIX-2276:
---

One minor nit to fix on commit, if you agree. How about changing this:
{code}
{+// skip salt and viewIndexId columns.
 int pkPosition = table.getBucketNum() == null ? 0 : 1;
+pkPosition = table.getViewIndexId() != null ? ++pkPosition : 
pkPosition;
{code}
to something a bit more readable like this:
{code}
{+// skip salt and viewIndexId columns.
 int pkPosition = (table.getBucketNum() == null ? 0 : 1) + 
(table.getViewIndexId() == null ? 0 : 1);
{code}
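
As a quick sanity check that the two formulations are equivalent (a standalone
sketch with invented method names; the real code reads these values from
Phoenix's PTable):

```java
// Standalone check that the original and suggested formulations compute
// the same pkPosition. bucketNum and viewIndexId are plain parameters
// here rather than PTable accessors.
class PkPositionCheck {
    // Original form: conditional increment.
    static int original(Integer bucketNum, Short viewIndexId) {
        int pkPosition = bucketNum == null ? 0 : 1;
        pkPosition = viewIndexId != null ? ++pkPosition : pkPosition;
        return pkPosition;
    }

    // Suggested form: one expression, skipping salt and viewIndexId columns.
    static int suggested(Integer bucketNum, Short viewIndexId) {
        return (bucketNum == null ? 0 : 1) + (viewIndexId == null ? 0 : 1);
    }

    public static void main(String[] args) {
        Integer[] buckets = {null, 4};
        Short[] viewIds = {null, (short) 1};
        for (Integer b : buckets) {
            for (Short v : viewIds) {
                if (original(b, v) != suggested(b, v)) {
                    throw new AssertionError("mismatch for " + b + ", " + v);
                }
            }
        }
        System.out.println("both forms agree on all four cases");
    }
}
```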

I don't see anything at first glance that would break right joins, but 
hopefully [~maryannxue] will.

I'd recommend committing this to master first, as the other branches will 
hopefully look like master very soon. File a separate JIRA for the right join 
breakage.



> Creating index on a global view on a multi-tenant table fails with NPE
> --
>
> Key: PHOENIX-2276
> URL: https://issues.apache.org/jira/browse/PHOENIX-2276
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
>  Labels: SFDC
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2276.patch
>
>
> {code}
> @Test
> public void testCreatingIndexOnGlobalView() throws Exception {
> String baseTable = "testRowTimestampColWithViews".toUpperCase();
> String globalView = "globalView".toUpperCase();
> String globalViewIdx = "globalView_idx".toUpperCase();
> long ts = nextTimestamp();
> try (Connection conn = getConnection(ts)) {
> conn.createStatement().execute("CREATE TABLE " + baseTable + " 
> (TENANT_ID CHAR(15) NOT NULL, PK2 DATE NOT NULL, PK3 INTEGER NOT NULL, KV1 
> VARCHAR, KV2 VARCHAR, KV3 CHAR(15) CONSTRAINT PK PRIMARY KEY(TENANT_ID, PK2 
> ROW_TIMESTAMP, PK3)) MULTI_TENANT=true");
> }
> ts = nextTimestamp();
> try (Connection conn = getConnection(ts)) {
> conn.createStatement().execute("CREATE VIEW " + globalView + " AS 
> SELECT * FROM " + baseTable + " WHERE KV1 = 'KV1'");
> }
> ts = nextTimestamp();
> try (Connection conn = getConnection(ts)) {
> conn.createStatement().execute("CREATE INDEX " + globalViewIdx + 
> " ON " + globalView + " (PK3 DESC, KV3) INCLUDE (KV1)");
> }
> }
> java.lang.NullPointerException
>   at 
> org.apache.phoenix.util.StringUtil.escapeBackslash(StringUtil.java:392)
>   at 
> org.apache.phoenix.compile.PostIndexDDLCompiler.compile(PostIndexDDLCompiler.java:78)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1027)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndexAtTimeStamp(MetaDataClient.java:903)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1321)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:95)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:315)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:1)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:306)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1375)
> {code}





[jira] [Commented] (PHOENIX-2982) Keep the number of active handlers balanced across salt buckets

2016-06-15 Thread Junegunn Choi (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15332944#comment-15332944
 ] 

Junegunn Choi commented on PHOENIX-2982:


[~jamestaylor] Thanks for the comment.

bq. Are you in need of this kind of balancing due to usage of our Phoenix 
Query Server?

No, we're not yet considering Query Server. We've been running an HBase cluster 
for "semi-real-time" analytics, which is supposed to answer aggregation queries 
over a few months of data within several seconds. All the business logic is 
written in Java as Endpoint coprocessors, and I'm currently evaluating Phoenix 
to see if it can effectively replace those hand-crafted coprocessors. One thing 
I noticed during the test is the problem described above, suboptimal resource 
utilization and handler exhaustion, which led me to this patch.

bq. Local indexes are very similar to salted tables - any thoughts around using 
the same technique for those?

I'm not familiar with that part of the code, but a cursory examination suggests 
the patch will also affect scans over local indexes, as they share 
{{ParallelIterators}}. I'll see if I can get some performance numbers.

bq. we can setup our own RpcSchedulerFactory and RpcScheduler ... I'm curious 
if you've thought about this angle at all

The logic here deals with the order in which the client submits scans to the 
regionservers, i.e. it tries not to put more burden on the already loaded 
regionservers (technically, salt buckets). An RpcScheduler is local to a 
regionserver and handles requests that have already been submitted to the 
server, so I believe it addresses a different type of problem. Please correct 
me if I'm wrong.

This patch does not change the round-robin processing of concurrent queries. It 
introduces another level of grouping inside a single query on a salted table. 
Let me try to delineate the conceptual hierarchy. Currently we use 
{{AbstractRoundRobinQueue}} to pick queries in round-robin fashion:

- AbstractRoundRobinQueue
-- Query 1
--- LinkedList of scans
-- Query 2
--- LinkedList of scans
-- ..

With the patch it becomes:

- AbstractRoundRobinQueue
-- Query 1
--- Salt bucket 1
 LinkedList of scans
--- Salt bucket 2
 LinkedList of scans
--- ...
-- Query 2
--- ...

A query is still picked in round-robin fashion as before (Q1 -> Q2 -> ...), but 
then we pick a scan for the query in the least-loaded salt bucket.
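The two-level pick described above can be sketched as follows. This is a 
minimal illustration, not Phoenix's actual {{AbstractRoundRobinQueue}} or 
{{JobManager}} code; all class and field names below are made up for the 
example. Round-robin across queries happens elsewhere; within the chosen 
query, the next scan is taken from the salt bucket with the fewest currently 
active scans.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Sketch only: pick the next scan of a query from its least-loaded bucket.
public class BucketBalancedPicker {
    static final class Bucket {
        final Queue<String> pendingScans = new ArrayDeque<>();
        int active; // scans currently running against this bucket
    }

    static final class Query {
        final List<Bucket> buckets = new ArrayList<>();
    }

    /** Pick the next scan from the least-loaded non-empty salt bucket. */
    static String nextScan(Query q) {
        Bucket best = null;
        for (Bucket b : q.buckets) {
            if (b.pendingScans.isEmpty()) continue;
            if (best == null || b.active < best.active) best = b;
        }
        if (best == null) return null; // no pending scans left for this query
        best.active++; // the caller decrements this when the scan completes
        return best.pendingScans.poll();
    }
}
```

A scan against a bucket that already has many active handlers is deferred in 
favor of one against an idle bucket, which is what keeps the per-bucket active 
counts balanced during runtime.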

> Keep the number of active handlers balanced across salt buckets
> ---
>
> Key: PHOENIX-2982
> URL: https://issues.apache.org/jira/browse/PHOENIX-2982
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Junegunn Choi
>Assignee: Junegunn Choi
> Attachments: PHOENIX-2982-v2.patch, PHOENIX-2982.patch, 
> cpu-util-with-patch.png, cpu-util-without-patch.png
>
>
> I'd like to discuss the idea of keeping the numbers of active handlers 
> balanced across the salt buckets during parallel scan by exposing the 
> counters to JobManager queue.
> h4. Background
> I was testing Phoenix on a 10-node test cluster. When I was running a few 
> full-scan queries simultaneously on a table whose {{SALT_BUCKETS}} is 100, I 
> noticed small queries such as {{SELECT * FROM T LIMIT 100}} are occasionally 
> blocked for up to tens of seconds due to the exhaustion of regionserver 
> handler threads.
> {{hbase.regionserver.handler.count}} was set to 100, which is much larger 
> than the default 30, so I didn't expect this to happen 
> ({{phoenix.query.threadPoolSize}} = 128 / 10 nodes * 4 queries =~ 50 < 100), 
> but from periodic thread dumps, I could observe that the numbers of active 
> handlers across the regionservers are often skewed during the execution.
> {noformat}
> # Obtained by periodic thread dumps
> # (key = regionserver ID, value = number of active handlers)
> 17:23:48: {1=>6,  2=>3,  3=>27, 4=>12, 5=>13, 6=>23, 7=>23, 8=>5,  9=>5,  
> 10=>10}
> 17:24:18: {1=>8,  2=>6,  3=>26, 4=>3,  5=>13, 6=>41, 7=>11, 8=>5,  9=>8,  
> 10=>5}
> 17:24:48: {1=>15, 3=>30, 4=>3,  5=>8,  6=>22, 7=>11, 8=>16, 9=>16, 10=>7}
> 17:25:18: {1=>6,  2=>12, 3=>37, 4=>6,  5=>4,  6=>2,  7=>21, 8=>10, 9=>24, 
> 10=>5}
> 17:25:48: {1=>4,  2=>9,  3=>48, 4=>14, 5=>2,  6=>7,  7=>18, 8=>16, 9=>2,  
> 10=>8}
> {noformat}
> Although {{ParallelIterators.submitWork}} shuffles the parallel scan tasks 
> before submitting them to {{ThreadPoolExecutor}}, there's currently no 
> mechanism to prevent the skew from happening during runtime.
> h4. Suggestion
> Maintain "active" counter for each salt bucket, and expose the numbers to 
> JobManager queue via specialized {{Producer}} implementation so that it can 
> choose a scan for the least loaded bucket.
> By doing so we can prevent the handler exhaustion problem described above and 
> can expect more consistent utilization of 

[jira] [Updated] (PHOENIX-2276) Creating index on a global view on a multi-tenant table fails with NPE

2016-06-15 Thread Maryann Xue (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maryann Xue updated PHOENIX-2276:
-
Attachment: PHOENIX-2276-1.fix

[~samarthjain], [~jamestaylor], I think the reason is basically a difference in 
how the tenantId bytes are set in different classes, in this case between 
BaseQueryPlan and ServerCacheClient: one (ServerCacheClient) sets the cache, 
and the other (BaseQueryPlan) reads from it. We should probably wrap these 
lines in a method to ensure that the tenantId bytes are always set the same 
way.
There is still another problem with removing the cache in this "fix" patch. If 
you guys allow me more time, I can probably fix it by tomorrow.
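The suggestion above, deriving the tenantId bytes in exactly one shared method 
so the cache writer and the cache reader cannot diverge, can be sketched as 
follows. All names are illustrative, not the actual Phoenix API.

```java
import java.util.Arrays;

// Sketch only: a single source of truth for the tenantId portion of a cache
// key, to be called by both the code that populates the server cache and the
// code that looks entries up.
public class TenantCacheKey {
    private static final byte[] EMPTY = new byte[0];

    /** Derive the tenantId bytes used in a cache key, in one place. */
    static byte[] tenantIdBytes(byte[] connectionTenantId) {
        return connectionTenantId == null ? EMPTY : connectionTenantId.clone();
    }

    /** Writer and reader agree iff they go through the same derivation. */
    static boolean sameTenant(byte[] a, byte[] b) {
        return Arrays.equals(tenantIdBytes(a), tenantIdBytes(b));
    }
}
```

The point of the design is not the particular null handling but that only one 
method ever computes the bytes, so a behavior change cannot affect the writer 
without equally affecting the reader.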

> Creating index on a global view on a multi-tenant table fails with NPE
> --
>
> Key: PHOENIX-2276
> URL: https://issues.apache.org/jira/browse/PHOENIX-2276
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
>  Labels: SFDC
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2276-1.fix, PHOENIX-2276.patch
>
>
> {code}
> @Test
> public void testCreatingIndexOnGlobalView() throws Exception {
> String baseTable = "testRowTimestampColWithViews".toUpperCase();
> String globalView = "globalView".toUpperCase();
> String globalViewIdx = "globalView_idx".toUpperCase();
> long ts = nextTimestamp();
> try (Connection conn = getConnection(ts)) {
> conn.createStatement().execute("CREATE TABLE " + baseTable + " 
> (TENANT_ID CHAR(15) NOT NULL, PK2 DATE NOT NULL, PK3 INTEGER NOT NULL, KV1 
> VARCHAR, KV2 VARCHAR, KV3 CHAR(15) CONSTRAINT PK PRIMARY KEY(TENANT_ID, PK2 
> ROW_TIMESTAMP, PK3)) MULTI_TENANT=true");
> }
> ts = nextTimestamp();
> try (Connection conn = getConnection(ts)) {
> conn.createStatement().execute("CREATE VIEW " + globalView + " AS 
> SELECT * FROM " + baseTable + " WHERE KV1 = 'KV1'");
> }
> ts = nextTimestamp();
> try (Connection conn = getConnection(ts)) {
> conn.createStatement().execute("CREATE INDEX " + globalViewIdx + 
> " ON " + globalView + " (PK3 DESC, KV3) INCLUDE (KV1)");
> }
> }
> java.lang.NullPointerException
>   at 
> org.apache.phoenix.util.StringUtil.escapeBackslash(StringUtil.java:392)
>   at 
> org.apache.phoenix.compile.PostIndexDDLCompiler.compile(PostIndexDDLCompiler.java:78)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1027)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndexAtTimeStamp(MetaDataClient.java:903)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1321)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:95)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:315)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:1)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:306)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1375)
> {code}





[jira] [Commented] (PHOENIX-2178) Tracing - total time listed for a certain trace does not correlate with query wall clock time

2016-06-15 Thread Pranavan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15332982#comment-15332982
 ] 

Pranavan commented on PHOENIX-2178:
---

Colin has opened a JIRA for the nano-time granularity, and I am working on it. 
The HTrace JIRA link: https://issues.apache.org/jira/browse/HTRACE-376

> Tracing - total time listed for a certain trace does not correlate with query 
> wall clock time
> -
>
> Key: PHOENIX-2178
> URL: https://issues.apache.org/jira/browse/PHOENIX-2178
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.5.0
>Reporter: Mujtaba Chohan
>  Labels: gsoc2016, tracing
>
> Wall clock time for a count(*) over a large table is 3+ms, however the total 
> sum(end_time - start_time) is less than 250ms for the trace_id generated 
> for this count(*) query.
> {code}
> Output of trace table:
> select sum(end_time  - start_time),count(*), description from 
> SYSTEM.TRACING_STATS WHERE TRACE_ID=X group by description;
> +----------------------------------+-----------+---------------------------+
> | SUM((END_TIME - START_TIME))     | COUNT(1)  | DESCRIPTION               |
> +----------------------------------+-----------+---------------------------+
> | 0                                | 3         | ClientService.Scan        |
> | 240                              | 253879    | HFileReaderV2.readBlock   |
> | 1                                | 1         | Scanner opened on server  |
> +----------------------------------+-----------+---------------------------+
> {code}





[jira] [Commented] (PHOENIX-2276) Creating index on a global view on a multi-tenant table fails with NPE

2016-06-15 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15332989#comment-15332989
 ] 

James Taylor commented on PHOENIX-2276:
---

Yes, tomorrow is fine. Thanks, [~maryannxue]! Is there an inherent assumption 
that the tenantId is always the first field in the row key? If that's the case, 
we should check if {{PTable.getViewIndexId() != null}}, since in that case the 
tenantId will be the second field. You'd always want to skip the first field in 
this case, as it's the index id.
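The check suggested above can be sketched as a tiny helper; for a view index 
the first row-key field is the view index id, so code that assumes the 
tenantId is the first field must skip one position. Names are illustrative, 
not the Phoenix API.

```java
// Sketch only: where the tenantId sits in the row key depends on whether the
// table is a view index (i.e. PTable.getViewIndexId() != null in Phoenix).
public class TenantIdPosition {
    /** 0-based position of the tenantId field in the row key. */
    static int tenantIdFieldPosition(boolean hasViewIndexId) {
        return hasViewIndexId ? 1 : 0; // skip the leading view index id
    }
}
```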

> Creating index on a global view on a multi-tenant table fails with NPE
> --
>
> Key: PHOENIX-2276
> URL: https://issues.apache.org/jira/browse/PHOENIX-2276
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
>  Labels: SFDC
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2276-1.fix, PHOENIX-2276.patch
>
>
> {code}
> @Test
> public void testCreatingIndexOnGlobalView() throws Exception {
> String baseTable = "testRowTimestampColWithViews".toUpperCase();
> String globalView = "globalView".toUpperCase();
> String globalViewIdx = "globalView_idx".toUpperCase();
> long ts = nextTimestamp();
> try (Connection conn = getConnection(ts)) {
> conn.createStatement().execute("CREATE TABLE " + baseTable + " 
> (TENANT_ID CHAR(15) NOT NULL, PK2 DATE NOT NULL, PK3 INTEGER NOT NULL, KV1 
> VARCHAR, KV2 VARCHAR, KV3 CHAR(15) CONSTRAINT PK PRIMARY KEY(TENANT_ID, PK2 
> ROW_TIMESTAMP, PK3)) MULTI_TENANT=true");
> }
> ts = nextTimestamp();
> try (Connection conn = getConnection(ts)) {
> conn.createStatement().execute("CREATE VIEW " + globalView + " AS 
> SELECT * FROM " + baseTable + " WHERE KV1 = 'KV1'");
> }
> ts = nextTimestamp();
> try (Connection conn = getConnection(ts)) {
> conn.createStatement().execute("CREATE INDEX " + globalViewIdx + 
> " ON " + globalView + " (PK3 DESC, KV3) INCLUDE (KV1)");
> }
> }
> java.lang.NullPointerException
>   at 
> org.apache.phoenix.util.StringUtil.escapeBackslash(StringUtil.java:392)
>   at 
> org.apache.phoenix.compile.PostIndexDDLCompiler.compile(PostIndexDDLCompiler.java:78)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1027)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndexAtTimeStamp(MetaDataClient.java:903)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1321)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:95)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:315)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:1)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:306)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1375)
> {code}





[jira] [Issue Comment Deleted] (PHOENIX-2952) array_length return negative value

2016-06-15 Thread Joseph Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joseph Sun updated PHOENIX-2952:

Comment: was deleted

(was: org.apache.phoenix.schema.types.PArrayDataType
{code}
public static int serailizeOffsetArrayIntoStream(DataOutputStream oStream, 
TrustedByteArrayOutputStream byteStream,
int noOfElements, int maxOffset, int[] offsetPos) throws 
IOException {
int offsetPosition = (byteStream.size());
byte[] offsetArr = null;
boolean useInt = true;
if (PArrayDataType.useShortForOffsetArray(maxOffset)) {
offsetArr = new byte[PArrayDataType.initOffsetArray(noOfElements, 
Bytes.SIZEOF_SHORT)];
useInt = false;
} else {
offsetArr = new byte[PArrayDataType.initOffsetArray(noOfElements, 
Bytes.SIZEOF_INT)];
noOfElements = -noOfElements;  // need to remove this line
}
int off = 0;
if (useInt) {
for (int pos : offsetPos) {
Bytes.putInt(offsetArr, off, pos);
off += Bytes.SIZEOF_INT;
}
} else {
for (int pos : offsetPos) {
Bytes.putShort(offsetArr, off, (short)(pos - Short.MAX_VALUE));
off += Bytes.SIZEOF_SHORT;
}
}
oStream.write(offsetArr);
oStream.writeInt(offsetPosition);
return noOfElements;
}
{code})

> array_length return negative value
> --
>
> Key: PHOENIX-2952
> URL: https://issues.apache.org/jira/browse/PHOENIX-2952
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Joseph Sun
>  Labels: test
>
> execute sql.
> {code}
> select 
> array_length(REGEXP_SPLIT('"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2""uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-
8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfas

[jira] [Commented] (PHOENIX-2952) array_length return negative value

2016-06-15 Thread Joseph Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15333019#comment-15333019
 ] 

Joseph Sun commented on PHOENIX-2952:
-

org.apache.phoenix.schema.types.PArrayDataType

{code}
public static int serailizeOffsetArrayIntoStream(DataOutputStream oStream, 
TrustedByteArrayOutputStream byteStream,
int noOfElements, int maxOffset, int[] offsetPos) throws 
IOException {
int offsetPosition = (byteStream.size());
byte[] offsetArr = null;
boolean useInt = true;
if (PArrayDataType.useShortForOffsetArray(maxOffset)) {
offsetArr = new byte[PArrayDataType.initOffsetArray(noOfElements, 
Bytes.SIZEOF_SHORT)];
useInt = false;
} else {
offsetArr = new byte[PArrayDataType.initOffsetArray(noOfElements, 
Bytes.SIZEOF_INT)];
noOfElements = -noOfElements;  //Convert to a negative value
}
int off = 0;
if (useInt) {
for (int pos : offsetPos) {
Bytes.putInt(offsetArr, off, pos);
off += Bytes.SIZEOF_INT;
}
} else {
for (int pos : offsetPos) {
Bytes.putShort(offsetArr, off, (short)(pos - Short.MAX_VALUE));
off += Bytes.SIZEOF_SHORT;
}
}
oStream.write(offsetArr);
oStream.writeInt(offsetPosition);
return noOfElements;
}
{code}

Modify
{code}
public static int getArrayLength(ImmutableBytesWritable ptr, PDataType 
baseType, Integer maxLength) {
byte[] bytes = ptr.get();
if (baseType.isFixedWidth()) {
int elemLength = maxLength == null ? baseType.getByteSize() : 
maxLength;
return (ptr.getLength() / elemLength);
}
//@line 1019, the array length may have been saved as a negative value.
return Math.abs(Bytes.toInt(bytes, (ptr.getOffset() + ptr.getLength() - (Bytes.SIZEOF_BYTE + Bytes.SIZEOF_INT))));
}
{code}
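The encoding relationship behind the proposed fix can be sketched as follows: 
the serializer negates the element count when int (rather than short) offsets 
are used, so a reader that wants the array length must take the absolute 
value, which is exactly what the {{Math.abs}} in the modified 
{{getArrayLength}} does. Names below are illustrative, not the Phoenix API.

```java
// Sketch only: the sign of the serialized count marks the offset width, so
// the magnitude is the real element count either way.
public class ArrayCountCodec {
    static int encodeCount(int noOfElements, boolean useIntOffsets) {
        return useIntOffsets ? -noOfElements : noOfElements; // sign = width
    }

    static int decodeCount(int serializedCount) {
        return Math.abs(serializedCount); // length is the magnitude
    }
}
```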

> array_length return negative value
> --
>
> Key: PHOENIX-2952
> URL: https://issues.apache.org/jira/browse/PHOENIX-2952
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Joseph Sun
>  Labels: test
>
> execute sql.
> {code}
> select 
> array_length(REGEXP_SPLIT('"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2""uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-
8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsf

[jira] [Comment Edited] (PHOENIX-2952) array_length return negative value

2016-06-15 Thread Joseph Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15333019#comment-15333019
 ] 

Joseph Sun edited comment on PHOENIX-2952 at 6/16/16 3:55 AM:
--

org.apache.phoenix.schema.types.PArrayDataType

{code}
public static int serailizeOffsetArrayIntoStream(DataOutputStream oStream, 
TrustedByteArrayOutputStream byteStream,
int noOfElements, int maxOffset, int[] offsetPos) throws 
IOException {
int offsetPosition = (byteStream.size());
byte[] offsetArr = null;
boolean useInt = true;
if (PArrayDataType.useShortForOffsetArray(maxOffset)) {
offsetArr = new byte[PArrayDataType.initOffsetArray(noOfElements, 
Bytes.SIZEOF_SHORT)];
useInt = false;
} else {
offsetArr = new byte[PArrayDataType.initOffsetArray(noOfElements, 
Bytes.SIZEOF_INT)];
noOfElements = -noOfElements;  //Convert to a negative value
}
int off = 0;
if (useInt) {
for (int pos : offsetPos) {
Bytes.putInt(offsetArr, off, pos);
off += Bytes.SIZEOF_INT;
}
} else {
for (int pos : offsetPos) {
Bytes.putShort(offsetArr, off, (short)(pos - Short.MAX_VALUE));
off += Bytes.SIZEOF_SHORT;
}
}
oStream.write(offsetArr);
oStream.writeInt(offsetPosition);
return noOfElements;
}
{code}

Modify
{code}
public static int getArrayLength(ImmutableBytesWritable ptr, PDataType 
baseType, Integer maxLength) {
byte[] bytes = ptr.get();
if (baseType.isFixedWidth()) {
int elemLength = maxLength == null ? baseType.getByteSize() : 
maxLength;
return (ptr.getLength() / elemLength);
}
//@line 1019, the array length may have been saved as a negative value.
return Math.abs(Bytes.toInt(bytes, (ptr.getOffset() + ptr.getLength() - (Bytes.SIZEOF_BYTE + Bytes.SIZEOF_INT))));
}
{code}


was (Author: ryvius):
org.apache.phoenix.schema.types.PArrayDataType

{code}
public static int serailizeOffsetArrayIntoStream(DataOutputStream oStream, 
TrustedByteArrayOutputStream byteStream,
int noOfElements, int maxOffset, int[] offsetPos) throws 
IOException {
int offsetPosition = (byteStream.size());
byte[] offsetArr = null;
boolean useInt = true;
if (PArrayDataType.useShortForOffsetArray(maxOffset)) {
offsetArr = new byte[PArrayDataType.initOffsetArray(noOfElements, 
Bytes.SIZEOF_SHORT)];
useInt = false;
} else {
offsetArr = new byte[PArrayDataType.initOffsetArray(noOfElements, 
Bytes.SIZEOF_INT)];
noOfElements = -noOfElements;  //Convert to a negative value
}
int off = 0;
if (useInt) {
for (int pos : offsetPos) {
Bytes.putInt(offsetArr, off, pos);
off += Bytes.SIZEOF_INT;
}
} else {
for (int pos : offsetPos) {
Bytes.putShort(offsetArr, off, (short)(pos - Short.MAX_VALUE));
off += Bytes.SIZEOF_SHORT;
}
}
oStream.write(offsetArr);
oStream.writeInt(offsetPosition);
return noOfElements;
}
{code}

Modify
{code}
public static int getArrayLength(ImmutableBytesWritable ptr, PDataType 
baseType, Integer maxLength) {
byte[] bytes = ptr.get();
if (baseType.isFixedWidth()) {
int elemLength = maxLength == null ? baseType.getByteSize() : 
maxLength;
return (ptr.getLength() / elemLength);
}
//@line 1019, the array length may have been saved as a negative value.
return Math.abs(Bytes.toInt(bytes, (ptr.getOffset() + ptr.getLength() - (Bytes.SIZEOF_BYTE + Bytes.SIZEOF_INT))));
}
{code}

> array_length return negative value
> --
>
> Key: PHOENIX-2952
> URL: https://issues.apache.org/jira/browse/PHOENIX-2952
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Joseph Sun
>  Labels: test
>
> execute sql.
> {code}
> select 
> array_length(REGEXP_SPLIT('"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2""uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"0

[GitHub] phoenix pull request #174: PHOENIX-2952 fix array_length return nagetive val...

2016-06-15 Thread opsun
GitHub user opsun opened a pull request:

https://github.com/apache/phoenix/pull/174

PHOENIX-2952 fix array_length return nagetive value

https://issues.apache.org/jira/browse/PHOENIX-2952

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/opsun/phoenix 4.7.0-HBase-1.1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/174.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #174


commit a46ce6f435ae516771130c8ee88f8a4a93047046
Author: opsun 
Date:   2016-06-16T04:24:08Z

PHOENIX-2952 fix array_length return nagetive value




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-2952) array_length return negative value

2016-06-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15333056#comment-15333056
 ] 

ASF GitHub Bot commented on PHOENIX-2952:
-

GitHub user opsun opened a pull request:

https://github.com/apache/phoenix/pull/174

PHOENIX-2952 fix array_length return nagetive value

https://issues.apache.org/jira/browse/PHOENIX-2952

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/opsun/phoenix 4.7.0-HBase-1.1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/174.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #174


commit a46ce6f435ae516771130c8ee88f8a4a93047046
Author: opsun 
Date:   2016-06-16T04:24:08Z

PHOENIX-2952 fix array_length return nagetive value




> array_length return negative value
> --
>
> Key: PHOENIX-2952
> URL: https://issues.apache.org/jira/browse/PHOENIX-2952
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Joseph Sun
>  Labels: test
>
> execute sql.
> {code}
> select 
> array_length(REGEXP_SPLIT('"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2""uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-
8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-6902914b85d2",adsfadfadsf,adfadfasdf,adsfasdf,asdf,"uuid":"02c7b029-8638-4ca7-8098-69

[jira] [Created] (PHOENIX-3000) Reduce memory consumption during DISTINCT aggregation

2016-06-15 Thread Lars Hofhansl (JIRA)
Lars Hofhansl created PHOENIX-3000:
--

 Summary: Reduce memory consumption during DISTINCT aggregation
 Key: PHOENIX-3000
 URL: https://issues.apache.org/jira/browse/PHOENIX-3000
 Project: Phoenix
  Issue Type: Bug
Reporter: Lars Hofhansl


In {{DistinctValueWithCountServerAggregator.aggregate}} we hold on to the ptr 
handed to us by HBase.
Note that this pointer points into an HFile Block, and hence we hold onto the 
entire block for the duration of the aggregation.

If the column has high cardinality we may end up holding the entire table in 
memory in the extreme case.
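An illustrative sketch of the problem described above (the class and field names here are hypothetical stand-ins, not the actual Phoenix code): retaining a small slice of a large byte[] - a pointer into an HFile block - keeps the entire backing array reachable, so the GC cannot reclaim the block while the aggregation holds the slice.

```java
public class PinnedSliceDemo {
    // Minimal stand-in for a (backing array, offset, length) pointer such as
    // Hadoop's ImmutableBytesWritable.
    static final class BytesPtr {
        final byte[] bytes;
        final int offset;
        final int length;
        BytesPtr(byte[] bytes, int offset, int length) {
            this.bytes = bytes;
            this.offset = offset;
            this.length = length;
        }
    }

    public static void main(String[] args) {
        byte[] block = new byte[64 * 1024];          // pretend 64 KB HFile block
        BytesPtr key = new BytesPtr(block, 128, 16); // 16-byte key inside the block
        // Even though the key is only 16 bytes, the pointer pins all 64 KB,
        // because the whole backing array stays reachable through key.bytes:
        System.out.println(key.bytes.length);        // prints 65536
    }
}
```

With many distinct keys, each pinning its own block this way, the heap fills with blocks that are mostly dead weight.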



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-3000) Reduce memory consumption during DISTINCT aggregation

2016-06-15 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-3000:
---
Attachment: 3000.txt

Here's a simple fix: copy the key unless it occupies more than 10% of the 
backing array handed to us. In other words, if making the copy frees at least 
90% of the heap held during the aggregate, we take the hit of the copy.

In some tests I ran, this made the difference between finishing and simply 
running out of memory.

Note that it is not quite that simple. If the block is cached by HBase anyway - 
and we're not using BLOCK_ENCODING - the copy does not actually save memory, 
since the block remains on the heap regardless.
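The heuristic above can be sketched as follows (illustrative only, with hypothetical names and a 10% cut-off as described; this is not the attached patch):

```java
public class CopyHeuristicSketch {

    static final double COPY_THRESHOLD = 0.10; // assumed 10% cut-off

    /**
     * Returns the bytes to retain for aggregation. If the key is a small
     * slice of a large backing array (an HFile block), copy it out so the
     * block can be garbage-collected; otherwise keep the original array,
     * since copying would not save much.
     */
    static byte[] keyToRetain(byte[] backingArray, int offset, int length) {
        if ((double) length / backingArray.length <= COPY_THRESHOLD) {
            byte[] copy = new byte[length];
            System.arraycopy(backingArray, offset, copy, 0, length);
            return copy; // small key: copy, let the block go
        }
        return backingArray; // large key: keep pointing at the block
    }

    public static void main(String[] args) {
        byte[] block = new byte[64 * 1024];               // pretend HFile block
        byte[] small = keyToRetain(block, 0, 16);         // 16-byte key: copied
        byte[] large = keyToRetain(block, 0, 32 * 1024);  // half the block: kept
        System.out.println((small != block) + " " + (large == block));
    }
}
```

The real aggregator would retain a (bytes, offset, length) pointer rather than a bare array; the sketch only shows the copy-versus-keep decision.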


> Reduce memory consumption during DISTINCT aggregation
> -
>
> Key: PHOENIX-3000
> URL: https://issues.apache.org/jira/browse/PHOENIX-3000
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
> Attachments: 3000.txt
>
>
> In {{DistinctValueWithCountServerAggregator.aggregate}} we hold on to the ptr 
> handed to us by HBase.
> Note that this pointer points into an HFile Block, and hence we hold onto the 
> entire block for the duration of the aggregation.
> If the column has high cardinality we may end up holding the entire table 
> in memory in the extreme case.





[jira] [Commented] (PHOENIX-3000) Reduce memory consumption during DISTINCT aggregation

2016-06-15 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15333239#comment-15333239
 ] 

Lars Hofhansl commented on PHOENIX-3000:


We should probably also keep track of the size of the map during the aggregate 
and fail once it exceeds some threshold, to keep the region server safe.
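The safeguard suggested above could look roughly like this (a minimal sketch with hypothetical names and a crude per-entry size estimate, not actual Phoenix code): track the approximate heap footprint of the distinct-value map and abort the aggregation once it exceeds a configured limit.

```java
import java.util.HashMap;
import java.util.Map;

public class BoundedDistinctAggregator {
    private final long maxBytes;      // configured memory limit for the map
    private long trackedBytes = 0;    // running estimate of the map's footprint
    private final Map<String, Integer> counts = new HashMap<>();

    BoundedDistinctAggregator(long maxBytes) {
        this.maxBytes = maxBytes;
    }

    void aggregate(String key) {
        Integer prev = counts.put(key, counts.getOrDefault(key, 0) + 1);
        if (prev == null) {
            // New distinct value: add its key bytes plus an assumed fixed
            // per-entry overhead, then enforce the limit.
            trackedBytes += key.length() + 48;
            if (trackedBytes > maxBytes) {
                throw new IllegalStateException(
                    "distinct aggregation exceeded memory limit of "
                        + maxBytes + " bytes");
            }
        }
    }
}
```

Failing the query this way is preferable to letting an unbounded map take down the region server.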


> Reduce memory consumption during DISTINCT aggregation
> -
>
> Key: PHOENIX-3000
> URL: https://issues.apache.org/jira/browse/PHOENIX-3000
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
> Attachments: 3000.txt
>
>
> In {{DistinctValueWithCountServerAggregator.aggregate}} we hold on to the ptr 
> handed to us by HBase.
> Note that this pointer points into an HFile Block, and hence we hold onto the 
> entire block for the duration of the aggregation.
> If the column has high cardinality we may end up holding the entire table 
> in memory in the extreme case.


