[jira] [Commented] (PHOENIX-3917) RowProjector#getEstimatedRowByteSize() returns incorrect value

2017-06-07 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16041038#comment-16041038
 ] 

Ankit Singhal commented on PHOENIX-3917:


Yes [~gsbiju], the change makes sense to me.

> RowProjector#getEstimatedRowByteSize() returns incorrect value
> --
>
> Key: PHOENIX-3917
> URL: https://issues.apache.org/jira/browse/PHOENIX-3917
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Biju Nair
>Priority: Minor
>
> {{queryPlan.getProjector().getEstimatedRowByteSize()}} returns 0 for the 
> query {{SELECT A_ID FROM TABLE}}, where {{A_ID}} is the primary key. The same 
> is the case for the query {{SELECT A_ID, A_DATA FROM TABLE}}, where 
> {{A_DATA}} is a non-key column. Assuming the method is meant to return the 
> estimated number of bytes for the query projection, the returned value of 0 
> is incorrect.
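A simplified model of the expected behavior (illustrative only; these class names are not Phoenix's actual code): the estimated row byte size should be the sum of the projected columns' estimated sizes, so projecting only a primary-key column should not yield 0.

```python
# Simplified model (not Phoenix's actual implementation): a row projector's
# estimated row size is the sum of its columns' estimated sizes. The reported
# bug is an implementation returning 0 even when columns are projected.
class ColumnProjector:
    def __init__(self, name, estimated_byte_size):
        self.name = name
        self.estimated_byte_size = estimated_byte_size

class RowProjector:
    def __init__(self, columns):
        self.columns = columns

    def get_estimated_row_byte_size(self):
        # Expected behavior: count every projected column, including
        # primary-key columns.
        return sum(c.estimated_byte_size for c in self.columns)

projector = RowProjector([ColumnProjector("A_ID", 8),
                          ColumnProjector("A_DATA", 32)])
assert projector.get_estimated_row_byte_size() == 40  # not 0
```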



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (PHOENIX-3797) Local Index - Compaction fails on table with local index due to non-increasing bloom keys

2017-06-06 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-3797.

Resolution: Fixed

> Local Index - Compaction fails on table with local index due to 
> non-increasing bloom keys
> -
>
> Key: PHOENIX-3797
> URL: https://issues.apache.org/jira/browse/PHOENIX-3797
> Project: Phoenix
>  Issue Type: Bug
> Environment: Head of 4.x-HBase-0.98 with PHOENIX-3796 patch applied. 
> HBase 0.98.23-hadoop2
>Reporter: Mujtaba Chohan
>Assignee: Ankit Singhal
>Priority: Blocker
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3797.patch, PHOENIX-3797_v2.patch, 
> PHOENIX-3797_v3.patch
>
>
> Compaction fails on table with local index.
> {noformat}
> 2017-04-19 16:37:56,521 ERROR 
> [RS:0;host:59455-smallCompactions-1492644947594] 
> regionserver.CompactSplitThread: Compaction failed Request = 
> regionName=FHA,00Dxx001gES005001xx03DGPd,1492644985470.92ec6436984981cdc8ef02388005a957.,
>  storeName=L#0, fileCount=3, fileSize=44.4 M (23.0 M, 10.7 M, 10.8 M), 
> priority=7, time=7442973347247614
> java.io.IOException: Non-increasing Bloom keys: 
> 00Dxx001gES005001xx03DGPd\x00\x00\x80\x00\x01H+&\xA1(00Dxx001gER001001xx03DGPb01739544DCtf
> after 
> 00Dxx001gES005001xx03DGPd\x00\x00\x80\x00\x01I+\xF4\x9Ax00Dxx001gER001001xx03DGPa017115434KTM
>
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Writer.appendGeneralBloomfilter(StoreFile.java:960)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Writer.append(StoreFile.java:996)
>   at 
> org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Compactor.java:428)
>   at 
> org.apache.hadoop.hbase.regionserver.compactions.Compactor.compact(Compactor.java:276)
>   at 
> org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:64)
>   at 
> org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:121)
>   at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1154)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1559)
>   at 
> org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.doCompaction(CompactSplitThread.java:502)
>   at 
> org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:540)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:722)
> {noformat}
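The "Non-increasing Bloom keys" IOException in the trace above comes from HBase's general bloom filter writer, which requires each appended key to sort strictly after the previous one. A minimal sketch of that ordering check (a simplified stand-in, not HBase's actual code):

```python
# Minimal sketch of the check behind the failure above: keys appended to the
# bloom filter must be strictly increasing. Local index rows that kept a
# stale region-start prefix after an hbck merge violate this ordering.
def append_bloom_keys(keys):
    appended = []
    last = None
    for key in keys:
        if last is not None and key <= last:
            raise IOError(f"Non-increasing Bloom keys: {key!r} after {last!r}")
        appended.append(key)
        last = key
    return appended

append_bloom_keys([b"a1", b"a2", b"b1"])   # ok: strictly increasing
try:
    append_bloom_keys([b"a1", b"b1", b"a2"])  # out of order, as in the log
except IOError as e:
    print(e)
```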





[jira] [Commented] (PHOENIX-3797) Local Index - Compaction fails on table with local index due to non-increasing bloom keys

2017-06-06 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16039580#comment-16039580
 ] 

Ankit Singhal commented on PHOENIX-3797:


Thanks [~lhofhansl] and [~jamestaylor] for the review and ideas. I created 
PHOENIX-3916 for the follow-on work.
Committed to master and the 4.x branches.



[jira] [Created] (PHOENIX-3916) Repair local index from offline tool

2017-06-06 Thread Ankit Singhal (JIRA)
Ankit Singhal created PHOENIX-3916:
--

 Summary: Repair local index from offline tool 
 Key: PHOENIX-3916
 URL: https://issues.apache.org/jira/browse/PHOENIX-3916
 Project: Phoenix
  Issue Type: Bug
Reporter: Ankit Singhal


Create an offline tool to repair local indexes (follow-on of PHOENIX-3797) that 
can be run after every hbck run to ensure that local indexes are consistent, and 
to repair them if they are corrupted by a merge of overlapping regions.
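A hypothetical outline of such a repair pass (all names here are illustrative; this is not the proposed tool's actual design): after an hbck run, verify that local index rows still carry the current region start-key prefix, and rebuild them from the data rows if not.

```python
# Hypothetical sketch of a post-hbck local-index repair pass (illustrative
# names only, not Phoenix code).
def check_and_repair(region_start, index_rows, data_rows, make_index_key):
    # Consistent only if every index key carries the current region-start
    # prefix and the keys are still in sorted order.
    consistent = (
        all(key.startswith(region_start) for key in index_rows)
        and index_rows == sorted(index_rows)
    )
    if consistent:
        return index_rows
    # Otherwise re-derive every index row from the region's data rows.
    return sorted(make_index_key(region_start, row) for row in data_rows)

# Index rows left over from a pre-merge region carry a stale prefix:
make_key = lambda start, row: start + b"\x00" + row
stale = [b"OLD\x00r1", b"OLD\x00r2"]
repaired = check_and_repair(b"NEW", stale, [b"r1", b"r2"], make_key)
assert repaired == [b"NEW\x00r1", b"NEW\x00r2"]
```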





[jira] [Comment Edited] (PHOENIX-3797) Local Index - Compaction fails on table with local index due to non-increasing bloom keys

2017-06-05 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16037323#comment-16037323
 ] 

Ankit Singhal edited comment on PHOENIX-3797 at 6/5/17 6:27 PM:


bq. Can we optionally disable the index? I'd be somewhat concerned about 
performance during compactions.
We can, but if the local index spans multiple column families/stores, I won't 
be sure when I can re-enable the indexes, as compaction runs independently for 
each store.

bq. You'd also want to truncate all the column families for the local index.
Column families will be truncated (with an empty file) and re-written (from 
the region path) as a part of compaction itself, so we may not need to do it 
explicitly.

bq. Please let us know, Ankit Singhal, if you think the above is feasible for 
4.11.0. If not, please commit your v3 patch.
As said above, without state management or a client executing the repair 
operation, it will be difficult to change the state of the index accurately. 
Let me commit this for 4.11.0, and we can work on adding "CHECK/REPAIR TABLE" 
functionality (with PHOENIX-3909) in a repair tool (ping [~sergey.soldatov]) 
which can be run after hbck to fix this.






was (Author: an...@apache.org):
bq. Can we optionally disable the index? I'd be somewhat concerned about 
performance during compactions.
We can, but if the local index spans multiple column families/stores, I won't 
be sure when I can re-enable the indexes, as compaction runs independently for 
each store.

bq. You'd also want to truncate all the column families for the local index.
Column families will be truncated (with an empty file) and re-written (from 
the region path) as a part of compaction itself, so we may not need to do it 
explicitly.

bq. Please let us know, Ankit Singhal, if you think the above is feasible for 
4.11.0. If not, please commit your v3 patch.
As said above, without state management or a client executing the repair 
operation, it will be difficult to change the state of the index accurately. 
Let me commit this for 4.11.0, and we can work on adding "CHECK/REPAIR TABLE" 
functionality in a repair tool (ping [~sergey.soldatov]) which can be run after 
hbck to fix this.






[jira] [Commented] (PHOENIX-3797) Local Index - Compaction fails on table with local index due to non-increasing bloom keys

2017-06-05 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16037323#comment-16037323
 ] 

Ankit Singhal commented on PHOENIX-3797:


bq. Can we optionally disable the index? I'd be somewhat concerned about 
performance during compactions.
We can, but if the local index spans multiple column families/stores, I won't 
be sure when I can re-enable the indexes, as compaction runs independently for 
each store.

bq. You'd also want to truncate all the column families for the local index.
Column families will be truncated (with an empty file) and re-written (from 
the region path) as a part of compaction itself, so we may not need to do it 
explicitly.

bq. Please let us know, Ankit Singhal, if you think the above is feasible for 
4.11.0. If not, please commit your v3 patch.
As said above, without state management or a client executing the repair 
operation, it will be difficult to change the state of the index accurately. 
Let me commit this for 4.11.0, and we can work on adding "CHECK/REPAIR TABLE" 
functionality in a repair tool (ping [~sergey.soldatov]) which can be run after 
hbck to fix this.







[jira] [Commented] (PHOENIX-3797) Local Index - Compaction fails on table with local index due to non-increasing bloom keys

2017-06-02 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16034641#comment-16034641
 ] 

Ankit Singhal commented on PHOENIX-3797:


[~jamestaylor], I'm not sure we have agreed on the approach yet, so I'm 
uploading a revised patch (v3) that fixes the local index during compaction by 
writing it through the region (we can't attach a LocalIndexStoreFileScanner to 
the files, as there is no way to identify the start key of the second region 
merged during hbck, and this becomes even more complicated if multiple regions 
get merged).

bq. Would also be nice if custom logic could be hooked into HBCK to address 
issues like this right there.
Yes [~lhofhansl], that would be great if it is possible ([~devaraj] was also 
suggesting the same). Let me know if we can add support for custom hooks in 
hbck; I'll be happy to update the patch.





[jira] [Updated] (PHOENIX-3797) Local Index - Compaction fails on table with local index due to non-increasing bloom keys

2017-06-02 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-3797:
---
Attachment: PHOENIX-3797_v3.patch



[jira] [Comment Edited] (PHOENIX-3797) Local Index - Compaction fails on table with local index due to non-increasing bloom keys

2017-06-01 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16032682#comment-16032682
 ] 

Ankit Singhal edited comment on PHOENIX-3797 at 6/1/17 9:27 AM:


bq. Actually... Can the same thing happen with just a regular merge operation 
from HBaseAdmin?
No, this will not happen with a regular merge operation from HBaseAdmin, as 
reference files will be created with the split row as the start key of the 
second region. So it is easy to detect the start key of the store file, which 
can be used to parse the rows of the daughter region during the scan and 
re-write the complete data with the new start key during compaction, using 
LocalIndexStoreFileScanner (IndexHalfStoreFileReader).

bq. Here's yet another idea: Can we hook a scanner right above the HFiles? That 
scanner would rewrite the keys based on the new region startkey. So now the 
store scanner for the index would do the right thing (merge sort between the 
values from the HFile scanners).
Yes, if we can identify the start key of the second region, we can make use of 
LocalIndexStoreFileScanner. But with the new region start key, we can't parse 
the local index data from the store files of the second region.

bq. So that (the v2 approach) can work. For large regions that would lead to a 
lot of HFiles, though (for a 10g region with a 256mb flush size it would lead 
to 40 files after the major compaction).
Yes, but I think it will be the same if we need to rebuild the local index for 
the region from the client.
Another problem with doing the repair only during compaction is that the data 
will be inconsistent for queries until we find and fix it during compaction.
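A simplified illustration of the key rewrite being discussed (assumed layout: a local index row key is the region start key followed by the index part; the function name is hypothetical). Rows from the second merged region still carry the old prefix, so rewriting them requires knowing that old start key, which an hbck merge of overlapping regions does not preserve.

```python
# Sketch of the prefix rewrite (simplified, assumed key layout): replace the
# old region-start prefix of a local index key with the merged region's
# start key. Without the old start key, the prefix boundary is unknown.
def rewrite_index_key(key, old_region_start, new_region_start):
    if not key.startswith(old_region_start):
        raise ValueError("key does not belong to the old region")
    return new_region_start + key[len(old_region_start):]

# A row written under old start key b"m", rewritten for merged start b"a":
assert rewrite_index_key(b"m\x00idx1", b"m", b"a") == b"a\x00idx1"
```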







was (Author: an...@apache.org):
bq. Actually... Can the same thing happen with just a regular merge operation 
from HBaseAdmin?
No, this will not happen with a regular merge operation from HBaseAdmin, as 
reference files will be created with the split row as the start key of the 
second region. So it is easy to detect the start key of the store file, which 
can be used to parse the rows of the daughter region during the scan and 
re-write the complete data with the new start key during compaction, using 
LocalIndexStoreFileScanner (IndexHalfStoreFileReader).

bq. Here's yet another idea: Can we hook a scanner right above the HFiles? That 
scanner would rewrite the keys based on the new region startkey. So now the 
store scanner for the index would do the right thing (merge sort between the 
values from the HFile scanners).
Yes, if we can identify the start key of the second region, we can make use of 
LocalIndexStoreFileScanner. But with the new region start key, we can't parse 
the local index data from the store files of the second region.







[jira] [Commented] (PHOENIX-3797) Local Index - Compaction fails on table with local index due to non-increasing bloom keys

2017-06-01 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16032685#comment-16032685
 ] 

Ankit Singhal commented on PHOENIX-3797:


Another problem with doing the repair only during compaction is that the data 
will be inconsistent for queries until we find and fix it during compaction.




[jira] [Issue Comment Deleted] (PHOENIX-3797) Local Index - Compaction fails on table with local index due to non-increasing bloom keys

2017-06-01 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-3797:
---
Comment: was deleted

(was: Another problem with doing the repair only during compaction is that the 
data will be inconsistent for queries until we find and fix it during 
compaction.)

> Local Index - Compaction fails on table with local index due to 
> non-increasing bloom keys
> -
>
> Key: PHOENIX-3797
> URL: https://issues.apache.org/jira/browse/PHOENIX-3797
> Project: Phoenix
>  Issue Type: Bug
> Environment: Head of 4.x-HBase-0.98 with PHOENIX-3796 patch applied. 
> HBase 0.98.23-hadoop2
>Reporter: Mujtaba Chohan
>Assignee: Ankit Singhal
>Priority: Blocker
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3797.patch, PHOENIX-3797_v2.patch
>
>
> Compaction fails on table with local index.
> {noformat}
> 2017-04-19 16:37:56,521 ERROR 
> [RS:0;host:59455-smallCompactions-1492644947594] 
> regionserver.CompactSplitThread: Compaction failed Request = 
> regionName=FHA,00Dxx001gES005001xx03DGPd,1492644985470.92ec6436984981cdc8ef02388005a957.,
>  storeName=L#0, fileCount=3, fileSize=44.4 M (23.0 M, 10.7 M, 10.8 M), 
> priority=7, time=7442973347247614
> java.io.IOException: Non-increasing Bloom keys: 
> 00Dxx001gES005001xx03DGPd\x00\x00\x80\x00\x01H+&\xA1(00Dxx001gER001001xx03DGPb01739544DCtf
> after 
> 00Dxx001gES005001xx03DGPd\x00\x00\x80\x00\x01I+\xF4\x9Ax00Dxx001gER001001xx03DGPa017115434KTM
>
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Writer.appendGeneralBloomfilter(StoreFile.java:960)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Writer.append(StoreFile.java:996)
>   at 
> org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Compactor.java:428)
>   at 
> org.apache.hadoop.hbase.regionserver.compactions.Compactor.compact(Compactor.java:276)
>   at 
> org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:64)
>   at 
> org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:121)
>   at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1154)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1559)
>   at 
> org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.doCompaction(CompactSplitThread.java:502)
>   at 
> org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:540)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:722)
> {noformat}





[jira] [Commented] (PHOENIX-3797) Local Index - Compaction fails on table with local index due to non-increasing bloom keys

2017-06-01 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16032682#comment-16032682
 ] 

Ankit Singhal commented on PHOENIX-3797:


bq. Actually... Can the same thing happen with just a regular merge operation 
from HBaseAdmin?
No, this will not happen with a regular merge operation from HBaseAdmin, 
because reference files are created with the split row as the start key of the 
second region. That makes it easy to detect the start key of the store file, 
which can then be used to parse the rows of the daughter region during the 
scan and rewrite the complete data with the new start key during compaction, 
using LocalIndexStoreFileScanner (IndexHalfStoreFileReader).

bq. Here's yet another idea: Can we hook a scanner right above the HFiles? That 
scanner would rewrite the keys based on the new region startkey. So now the 
store scanner for the index would do the right thing (merge sort between the 
values from the HFile scanners).
Yes, if we can identify the start key of the second region, we can make use of 
LocalIndexStoreFileScanner. But with only the new region start key, we can't 
parse the local index data from the store files of the second region.
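
To make the idea above concrete, here is a minimal, hedged sketch (class and 
method names are my own for illustration, not Phoenix's actual API) of the 
key-rewriting step: local index rows carry their region's start key as a 
prefix, so a scanner reading a store file under a different region boundary 
must swap the old prefix for the new region's start key:

```java
import java.util.Arrays;

// Hypothetical illustration (not Phoenix's actual code): local index row keys
// are prefixed with the start key of the region that wrote them, so a scanner
// reading a store file under a new region boundary must replace the old
// prefix with the new region's start key.
public class LocalIndexKeyRewriter {

    // Replace the old region-start-key prefix of a local index row with the
    // new region's start key; the remainder of the row is kept unchanged.
    static byte[] rewrite(byte[] row, byte[] oldStartKey, byte[] newStartKey) {
        if (row.length < oldStartKey.length
                || !Arrays.equals(Arrays.copyOf(row, oldStartKey.length), oldStartKey)) {
            throw new IllegalArgumentException("row does not start with the expected region start key");
        }
        byte[] out = new byte[newStartKey.length + row.length - oldStartKey.length];
        System.arraycopy(newStartKey, 0, out, 0, newStartKey.length);
        System.arraycopy(row, oldStartKey.length, out, newStartKey.length,
                row.length - oldStartKey.length);
        return out;
    }

    public static void main(String[] args) {
        byte[] rewritten = rewrite("regA_idxRow".getBytes(), "regA".getBytes(), "regB".getBytes());
        System.out.println(new String(rewritten)); // prints regB_idxRow
    }
}
```

This covers only the prefix-swap step; the hard part the comment points out is 
recovering the second region's original start key at all once it is no longer 
present in the region metadata.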











[jira] [Updated] (PHOENIX-3898) Empty result set after split with local index on multi-tenant table

2017-05-31 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-3898:
---
Description: 
While testing, I encountered this (seems related to PHOENIX-3832):
{code}
CREATE TABLE IF NOT EXISTS TM (PKA CHAR(15) NOT NULL, PKF CHAR(3) NOT NULL,PKP 
CHAR(15) NOT NULL, CRD DATE NOT NULL, EHI CHAR(15) NOT NULL, FID CHAR(15), 
CREATED_BY_ID VARCHAR,FH VARCHAR, DT VARCHAR, OS VARCHAR, NS VARCHAR, OFN 
VARCHAR CONSTRAINT PK PRIMARY KEY ( PKA, PKF, PKP, CRD DESC, EHI ))  VERSIONS=1 
,MULTI_TENANT=true;
CREATE LOCAL INDEX IF NOT EXISTS TIDX ON TM (PKF, CRD, PKP, EHI);
{code}

{code}
0: jdbc:phoenix:localhost> select count(*) from tidx;
+---+
| COUNT(1)  |
+---+
| 30|
+---+
{code}
{code}
hbase(main):002:0> split 'TM'
{code}
{code}
0: jdbc:phoenix:localhost> select count(*) from tidx;
+---+
| COUNT(1)  |
+---+
| 0 |
+---+
{code}

  was:
While testing encounters this(seems related to PHOENIX-3832):-
{code}
CREATE TABLE IF NOT EXISTS TM (PKA CHAR(15) NOT NULL, PKF CHAR(3) NOT NULL,PKP 
CHAR(15) NOT NULL, CRD DATE NOT NULL, EHI CHAR(15) NOT NULL, FID CHAR(15), 
CREATED_BY_ID VARCHAR,FH VARCHAR, DT VARCHAR, OS VARCHAR, NS VARCHAR, OFN 
VARCHAR CONSTRAINT PK PRIMARY KEY ( PKA, PKF, PKP, CRD DESC, EHI ))  VERSIONS=1 
,MULTI_TENANT=true;
CREATE LOCAL INDEX IF NOT EXISTS TIDX ON TM (PKF, CRD, PKP, EHI);
{code}

{code}
0: jdbc:phoenix:localhost> select count(*) from tidx;
+---+
| COUNT(1)  |
+---+
| 30|
+---+
{code}
{code}
hbase(main):002:0> split 'T
{code}
{code}
0: jdbc:phoenix:localhost> select count(*) from tidx;
+---+
| COUNT(1)  |
+---+
| 0 |
+---+
{code}


> Empty result set after split with local index on multi-tenant table
> ---
>
> Key: PHOENIX-3898
> URL: https://issues.apache.org/jira/browse/PHOENIX-3898
> Project: Phoenix
>      Issue Type: Bug
>Reporter: Ankit Singhal
> Fix For: 4.11.0
>
>
> While testing, I encountered this (seems related to PHOENIX-3832):
> {code}
> CREATE TABLE IF NOT EXISTS TM (PKA CHAR(15) NOT NULL, PKF CHAR(3) NOT 
> NULL,PKP CHAR(15) NOT NULL, CRD DATE NOT NULL, EHI CHAR(15) NOT NULL, FID 
> CHAR(15), CREATED_BY_ID VARCHAR,FH VARCHAR, DT VARCHAR, OS VARCHAR, NS 
> VARCHAR, OFN VARCHAR CONSTRAINT PK PRIMARY KEY ( PKA, PKF, PKP, CRD DESC, EHI 
> ))  VERSIONS=1 ,MULTI_TENANT=true;
> CREATE LOCAL INDEX IF NOT EXISTS TIDX ON TM (PKF, CRD, PKP, EHI);
> {code}
> {code}
> 0: jdbc:phoenix:localhost> select count(*) from tidx;
> +---+
> | COUNT(1)  |
> +---+
> | 30|
> +---+
> {code}
> {code}
> hbase(main):002:0> split 'TM'
> {code}
> {code}
> 0: jdbc:phoenix:localhost> select count(*) from tidx;
> +---+
> | COUNT(1)  |
> +---+
> | 0 |
> +---+
> {code}





[jira] [Created] (PHOENIX-3898) Empty result set after split with local index on multi-tenant table

2017-05-31 Thread Ankit Singhal (JIRA)
Ankit Singhal created PHOENIX-3898:
--

 Summary: Empty result set after split with local index on 
multi-tenant table
 Key: PHOENIX-3898
 URL: https://issues.apache.org/jira/browse/PHOENIX-3898
 Project: Phoenix
  Issue Type: Bug
Reporter: Ankit Singhal
 Fix For: 4.11.0


While testing, I encountered this (seems related to PHOENIX-3832):
{code}
CREATE TABLE IF NOT EXISTS TM (PKA CHAR(15) NOT NULL, PKF CHAR(3) NOT NULL,PKP 
CHAR(15) NOT NULL, CRD DATE NOT NULL, EHI CHAR(15) NOT NULL, FID CHAR(15), 
CREATED_BY_ID VARCHAR,FH VARCHAR, DT VARCHAR, OS VARCHAR, NS VARCHAR, OFN 
VARCHAR CONSTRAINT PK PRIMARY KEY ( PKA, PKF, PKP, CRD DESC, EHI ))  VERSIONS=1 
,MULTI_TENANT=true;
CREATE LOCAL INDEX IF NOT EXISTS TIDX ON TM (PKF, CRD, PKP, EHI);
{code}

{code}
0: jdbc:phoenix:localhost> select count(*) from tidx;
+---+
| COUNT(1)  |
+---+
| 30|
+---+
{code}
{code}
hbase(main):002:0> split 'T
{code}
{code}
0: jdbc:phoenix:localhost> select count(*) from tidx;
+---+
| COUNT(1)  |
+---+
| 0 |
+---+
{code}





[jira] [Commented] (PHOENIX-3797) Local Index - Compaction fails on table with local index due to non-increasing bloom keys

2017-05-30 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16030139#comment-16030139
 ] 

Ankit Singhal commented on PHOENIX-3797:


bq. It also seems fishy to apply the mutations in a method that is called 
get..., the passed localIndexResults is not used
Yes [~lhofhansl], the v2 patch was created at the last minute by updating just 
the required pieces, to show that there is an alternate approach to fixing the 
index writes in the repair scanner. 
Refactoring and comments will follow in a subsequent patch. 

bq. Lastly it seems now that each time next is called we execute the batchmutate?
Yes, I was planning to reuse the same batch size configuration.
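
A minimal sketch of what reusing a batch-size configuration could look like 
(the class and configuration handling are assumptions for illustration, not 
Phoenix's actual implementation): mutations accumulated across next() calls 
are flushed through one batch call whenever the configured size is reached, 
rather than once per next():

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical sketch (not Phoenix's actual code): buffer index mutations and
// flush them in batches of a configured size instead of issuing one
// batch-mutate call per scanner next() invocation.
public class MutationBatcher<M> {
    private final int batchSize;             // e.g. read from a batch-size config
    private final Consumer<List<M>> flusher; // stands in for the real batchMutate call
    private final List<M> buffer = new ArrayList<>();
    private int flushCount = 0;

    public MutationBatcher(int batchSize, Consumer<List<M>> flusher) {
        this.batchSize = batchSize;
        this.flusher = flusher;
    }

    // Called once per mutation produced by next(); flushes when the batch is full.
    public void add(M mutation) {
        buffer.add(mutation);
        if (buffer.size() >= batchSize) {
            flush();
        }
    }

    // Flush any remaining mutations, e.g. when the scanner is closed.
    public void flush() {
        if (buffer.isEmpty()) {
            return;
        }
        flusher.accept(new ArrayList<>(buffer));
        buffer.clear();
        flushCount++;
    }

    public int getFlushCount() {
        return flushCount;
    }
}
```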







[jira] [Comment Edited] (PHOENIX-3797) Local Index - Compaction fails on table with local index due to non-increasing bloom keys

2017-05-30 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16029614#comment-16029614
 ] 

Ankit Singhal edited comment on PHOENIX-3797 at 5/30/17 4:06 PM:
-

[~jamestaylor], the local index data is in the right sorted order. 

Compaction was failing because:
We have a RepairScanner to handle the case where two regions are merged during 
an hbck run (to repair overlaps and the like); such merges are fine for data 
regions but can corrupt the data of a local index (since we use the start key 
of the region as the prefix for local index rows to keep the data within the 
region).
Due to the bug (basically a typo), the local index files are always identified 
as inconsistent with respect to the region boundaries, so this repair scanner 
runs every time and fails with the above exception (because we are creating 
index mutations from the data store and writing them directly to the local 
index HFiles, which will never be sorted unless we write through the right 
path).

Attaching the fix for that too, in case we want to pursue this repair as well.
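
The shape of the failure described above can be sketched as follows — a 
simplified, hypothetical illustration (the names and the check itself are 
assumptions, not the actual Phoenix code): a local index store file is 
consistent when the region start key it was written under matches the region's 
current start key, and an inverted comparison makes every file look 
inconsistent, forcing the repair path on every compaction:

```java
import java.util.Arrays;

// Simplified, hypothetical illustration of the boundary check discussed above
// (not the actual Phoenix code): a local index store file is consistent with
// its region when it was written under the region's current start key.
public class LocalIndexBoundaryCheck {

    static boolean needsRepair(byte[] fileRegionStartKey, byte[] currentRegionStartKey) {
        // Repair is only needed when the boundaries no longer match,
        // e.g. after regions were merged by hbck.
        return !Arrays.equals(fileRegionStartKey, currentRegionStartKey);
    }

    // The bug class described in the comment: an inverted (typo'd) condition
    // reports every file as needing repair, so the repair scanner runs on
    // every compaction and feeds it unsorted, directly-built index mutations.
    static boolean buggyNeedsRepair(byte[] fileRegionStartKey, byte[] currentRegionStartKey) {
        return Arrays.equals(fileRegionStartKey, currentRegionStartKey); // comparison inverted by mistake
    }
}
```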




was (Author: an...@apache.org):
[~jamestaylor], Yes Local Index data is in right sorted order. 

Compaction was failing because :-
We have a RepairScanner to handle the cases when two regions are merged during 
the hbck run(to repair overlaps or something) ,as these merges will be fine for 
data regions but can corrupt the data for local index(as we use start key of 
the region as suffix for local index to maintain the data within the region)
Due to the bug(basically the typo), the local index files are indentified 
always inconsistent with respect to region boundaries resulting in this repair 
scanner to run everytime and which is failing with above exception.(because we 
are creating index mutation from data store and writing directly to local index 
hfiles).

Attaching the fix for the same too, in case we want to pursue with this repair 
as well.








[jira] [Updated] (PHOENIX-3797) Local Index - Compaction fails on table with local index due to non-increasing bloom keys

2017-05-30 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-3797:
---
Attachment: PHOENIX-3797_v2.patch






[jira] [Commented] (PHOENIX-3797) Local Index - Compaction fails on table with local index due to non-increasing bloom keys

2017-05-30 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16029614#comment-16029614
 ] 

Ankit Singhal commented on PHOENIX-3797:


[~jamestaylor], yes, the local index data is in the right sorted order. 

Compaction was failing because:
We have a RepairScanner to handle the case where two regions are merged during 
an hbck run (to repair overlaps and the like); such merges are fine for data 
regions but can corrupt the data of a local index (since we use the start key 
of the region as the prefix for local index rows to keep the data within the 
region).
Due to the bug (basically a typo), the local index files are always identified 
as inconsistent with respect to the region boundaries, so this repair scanner 
runs every time and fails with the above exception (because we are creating 
index mutations from the data store and writing them directly to the local 
index HFiles).

Attaching the fix for that too, in case we want to pursue this repair as well.








[jira] [Commented] (PHOENIX-3881) Support Arrays in phoenix-calcite

2017-05-30 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16029426#comment-16029426
 ] 

Ankit Singhal commented on PHOENIX-3881:


Current status (the wip_2 patch fixes ~165 test cases):

{code}
28/34 (ArrayAppendFunctionIT)
21/30 (ArrayConcatFunctionIT)
65/80 (ArrayIT) 
21/26 (ArrayFillFunctionIT) 
28/36 (ArrayToStringFunctionIT)
2/16  (ArrayWithNullIT)
{code}

> Support Arrays in phoenix-calcite
> -
>
> Key: PHOENIX-3881
> URL: https://issues.apache.org/jira/browse/PHOENIX-3881
> Project: Phoenix
>  Issue Type: Improvement
>    Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>  Labels: calcite
> Attachments: PHOENIX-3881_wip_2.patch, PHOENIX-3881_wip.patch
>
>






[jira] [Updated] (PHOENIX-3881) Support Arrays in phoenix-calcite

2017-05-30 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-3881:
---
Attachment: PHOENIX-3881_wip_2.patch

> Support Arrays in phoenix-calcite
> -
>
> Key: PHOENIX-3881
> URL: https://issues.apache.org/jira/browse/PHOENIX-3881
> Project: Phoenix
>  Issue Type: Improvement
>        Reporter: Ankit Singhal
>    Assignee: Ankit Singhal
>  Labels: calcite
> Attachments: PHOENIX-3881_wip_2.patch, PHOENIX-3881_wip.patch
>
>






[jira] [Comment Edited] (PHOENIX-3797) Local Index - Compaction fails on table with local index due to non-increasing bloom keys

2017-05-30 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16029290#comment-16029290
 ] 

Ankit Singhal edited comment on PHOENIX-3797 at 5/30/17 1:05 PM:
-

Let's disable the repair scanner for now, as it seems we will never have index 
mutations in sorted order when they are built from the files of the 
corresponding data store during compaction. Attaching the patch for the same.


was (Author: an...@apache.org):
Let's disable the repair scanner for now as it may take a time to identify why 
index mutations from the data store are not built correctly during repair. 
Attaching the patch for the same.






[jira] [Updated] (PHOENIX-3797) Local Index - Compaction fails on table with local index due to non-increasing bloom keys

2017-05-30 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-3797:
---
Attachment: PHOENIX-3797.patch






[jira] [Commented] (PHOENIX-3797) Local Index - Compaction fails on table with local index due to non-increasing bloom keys

2017-05-30 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16029290#comment-16029290
 ] 

Ankit Singhal commented on PHOENIX-3797:


Let's disable the repair scanner for now, as it may take time to identify why 
index mutations from the data store are not built correctly during repair. 
Attaching the patch for the same.

> {noformat}
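The "Non-increasing Bloom keys" failure above is a writer-side sanity check: keys fed to a store file's general Bloom filter must arrive in non-decreasing byte order. A minimal sketch of that invariant in Python (hypothetical helper, not the HBase implementation):

```python
def append_bloom_key(prev_key, key):
    """Mimic the ordering check in StoreFile.Writer#appendGeneralBloomfilter:
    general Bloom filter keys must arrive in non-decreasing byte order."""
    if prev_key is not None and key < prev_key:  # bytes compare lexicographically
        raise IOError("Non-increasing Bloom keys: %r after %r" % (key, prev_key))
    return key

# Keys emitted in row-key order pass; an out-of-order key trips the check,
# which is what the compaction above hit for the local-index store.
prev = None
for k in (b"a1", b"a2", b"b1"):
    prev = append_bloom_key(prev, k)
```

This is why the repair scanner producing index rows out of row-key order makes the compaction's store file writer abort.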





[jira] [Resolved] (PHOENIX-3880) Is there any way to Move Phoenix table from one Hadoop Cluster to another (with Phoenix Metadata)

2017-05-24 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-3880.

Resolution: Invalid

> Is there any way to Move Phoenix table from one Hadoop Cluster to another 
> (with Phoenix Metadata)
> -
>
> Key: PHOENIX-3880
> URL: https://issues.apache.org/jira/browse/PHOENIX-3880
> Project: Phoenix
>  Issue Type: Wish
>Affects Versions: 4.8.0
> Environment: PROD and UAT
>Reporter: Vijay Jayaraman
>Priority: Minor
>
> Can you please tell me how to move a Phoenix table with its data from one 
> Hadoop cluster to another (say, from PROD to UAT)?
> Note: we could copy the HBase data, which is under 
> /hbase/default/data/$Table_Name$,
> but we are not sure how to move the metadata so that it will be visible in 
> Phoenix.
> Thanks, 
> Vijay





[jira] [Commented] (PHOENIX-3880) Is there any way to Move Phoenix table from one Hadoop Cluster to another (with Phoenix Metadata)

2017-05-24 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16022794#comment-16022794
 ] 

Ankit Singhal commented on PHOENIX-3880:


Please reach out to u...@phoenix.apache.org for guidance.

> Is there any way to Move Phoenix table from one Hadoop Cluster to another 
> (with Phoenix Metadata)
> -
>
> Key: PHOENIX-3880
> URL: https://issues.apache.org/jira/browse/PHOENIX-3880
> Project: Phoenix
>  Issue Type: Wish
>Affects Versions: 4.8.0
> Environment: PROD and UAT
>Reporter: Vijay Jayaraman
>Priority: Minor
>
> Can you please tell me how to move a Phoenix table with its data from one 
> Hadoop cluster to another (say, from PROD to UAT)?
> Note: we could copy the HBase data, which is under 
> /hbase/default/data/$Table_Name$,
> but we are not sure how to move the metadata so that it will be visible in 
> Phoenix.
> Thanks, 
> Vijay





[jira] [Updated] (PHOENIX-3881) Support Arrays in phoenix-calcite

2017-05-24 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-3881:
---
Attachment: PHOENIX-3881_wip.patch

> Support Arrays in phoenix-calcite
> -
>
> Key: PHOENIX-3881
> URL: https://issues.apache.org/jira/browse/PHOENIX-3881
> Project: Phoenix
>  Issue Type: Improvement
>        Reporter: Ankit Singhal
>    Assignee: Ankit Singhal
>  Labels: calcite
> Attachments: PHOENIX-3881_wip.patch
>
>






[jira] [Updated] (PHOENIX-3882) Support Array functions

2017-05-24 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-3882:
---
Labels: calcite  (was: )

> Support Array functions 
> 
>
> Key: PHOENIX-3882
> URL: https://issues.apache.org/jira/browse/PHOENIX-3882
> Project: Phoenix
>  Issue Type: Sub-task
>        Reporter: Ankit Singhal
>    Assignee: Ankit Singhal
>  Labels: calcite
>
> We need to infer the return type of an array function from the data types of 
> its arguments.





[jira] [Created] (PHOENIX-3882) Support Array functions

2017-05-24 Thread Ankit Singhal (JIRA)
Ankit Singhal created PHOENIX-3882:
--

 Summary: Support Array functions 
 Key: PHOENIX-3882
 URL: https://issues.apache.org/jira/browse/PHOENIX-3882
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Ankit Singhal
Assignee: Ankit Singhal


We need to infer the return type of an array function from the data types of 
its arguments.







[jira] [Updated] (PHOENIX-3502) Support ARRAY DML in Calcite-Phoenix

2017-05-24 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-3502:
---
Issue Type: Sub-task  (was: Improvement)
Parent: PHOENIX-3881

> Support ARRAY DML in Calcite-Phoenix
> 
>
> Key: PHOENIX-3502
> URL: https://issues.apache.org/jira/browse/PHOENIX-3502
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Eric Lomore
>    Assignee: Ankit Singhal
>  Labels: calcite
>
> Array construction DMLs don't currently work as expected.
> Initially it seemed like a type matching issue, but once I forced the correct 
> types, the validator still throws exceptions.
> Example input query:
> {code}String ddl = "CREATE TABLE " + tableName + " (region_name VARCHAR 
> PRIMARY KEY,varchars CHAR(5)[],integers INTEGER[],doubles DOUBLE[],bigints 
> BIGINT[],chars CHAR(15)[],double1 DOUBLE,char1 CHAR(17),nullcheck 
> INTEGER,chars2 CHAR(15)[])";
> String dml = "UPSERT INTO " + tableName + 
> "(region_name,varchars,integers,doubles,bigints,chars,double1,char1,nullcheck,chars2)
>  VALUES('SF Bay Area'," +
> "ARRAY['2345','46345','23234']," +
> "ARRAY[2345,46345,23234,456]," +
> "ARRAY[23.45,46.345,23.234,45.6,5.78]," +
> "ARRAY[12,34,56,78,910]," +
> "ARRAY['a','','c','ddd','e']," +
> "23.45," +
> "'wert'," +
> "NULL," +
> "ARRAY['a','','c','ddd','e','foo']" +
> ")";
> {code}
> Exception thrown:
> {code}
> Caused by: org.apache.calcite.sql.validate.SqlValidatorException: Cannot 
> assign to target field 'VARCHARS' of type CHAR(5) ARRAY from source field 
> 'EXPR$1' of type CHAR(5) ARRAY
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.calcite.runtime.Resources$ExInstWithCause.ex(Resources.java:405)
>   at org.apache.calcite.runtime.Resources$ExInst.ex(Resources.java:514)
>   ... 53 more
> {code}





[jira] [Commented] (PHOENIX-3502) Support ARRAY DML in Calcite-Phoenix

2017-05-24 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16022717#comment-16022717
 ] 

Ankit Singhal commented on PHOENIX-3502:


This is blocked on CALCITE-1804.

> Support ARRAY DML in Calcite-Phoenix
> 
>
> Key: PHOENIX-3502
> URL: https://issues.apache.org/jira/browse/PHOENIX-3502
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Eric Lomore
>Assignee: Ankit Singhal
>  Labels: calcite
>
> Array construction DMLs don't currently work as expected.
> Initially it seemed like a type matching issue, but once I forced the correct 
> types, the validator still throws exceptions.
> Example input query:
> {code}String ddl = "CREATE TABLE " + tableName + " (region_name VARCHAR 
> PRIMARY KEY,varchars CHAR(5)[],integers INTEGER[],doubles DOUBLE[],bigints 
> BIGINT[],chars CHAR(15)[],double1 DOUBLE,char1 CHAR(17),nullcheck 
> INTEGER,chars2 CHAR(15)[])";
> String dml = "UPSERT INTO " + tableName + 
> "(region_name,varchars,integers,doubles,bigints,chars,double1,char1,nullcheck,chars2)
>  VALUES('SF Bay Area'," +
> "ARRAY['2345','46345','23234']," +
> "ARRAY[2345,46345,23234,456]," +
> "ARRAY[23.45,46.345,23.234,45.6,5.78]," +
> "ARRAY[12,34,56,78,910]," +
> "ARRAY['a','','c','ddd','e']," +
> "23.45," +
> "'wert'," +
> "NULL," +
> "ARRAY['a','','c','ddd','e','foo']" +
> ")";
> {code}
> Exception thrown:
> {code}
> Caused by: org.apache.calcite.sql.validate.SqlValidatorException: Cannot 
> assign to target field 'VARCHARS' of type CHAR(5) ARRAY from source field 
> 'EXPR$1' of type CHAR(5) ARRAY
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.calcite.runtime.Resources$ExInstWithCause.ex(Resources.java:405)
>   at org.apache.calcite.runtime.Resources$ExInst.ex(Resources.java:514)
>   ... 53 more
> {code}





[jira] [Created] (PHOENIX-3881) Support Arrays in phoenix-calcite

2017-05-24 Thread Ankit Singhal (JIRA)
Ankit Singhal created PHOENIX-3881:
--

 Summary: Support Arrays in phoenix-calcite
 Key: PHOENIX-3881
 URL: https://issues.apache.org/jira/browse/PHOENIX-3881
 Project: Phoenix
  Issue Type: Improvement
Reporter: Ankit Singhal
Assignee: Ankit Singhal








[jira] [Assigned] (PHOENIX-3502) Support ARRAY DML in Calcite-Phoenix

2017-05-24 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal reassigned PHOENIX-3502:
--

Assignee: Ankit Singhal

> Support ARRAY DML in Calcite-Phoenix
> 
>
> Key: PHOENIX-3502
> URL: https://issues.apache.org/jira/browse/PHOENIX-3502
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Eric Lomore
>    Assignee: Ankit Singhal
>  Labels: calcite
>
> Array construction DMLs don't currently work as expected.
> Initially it seemed like a type matching issue, but once I forced the correct 
> types, the validator still throws exceptions.
> Example input query:
> {code}String ddl = "CREATE TABLE " + tableName + " (region_name VARCHAR 
> PRIMARY KEY,varchars CHAR(5)[],integers INTEGER[],doubles DOUBLE[],bigints 
> BIGINT[],chars CHAR(15)[],double1 DOUBLE,char1 CHAR(17),nullcheck 
> INTEGER,chars2 CHAR(15)[])";
> String dml = "UPSERT INTO " + tableName + 
> "(region_name,varchars,integers,doubles,bigints,chars,double1,char1,nullcheck,chars2)
>  VALUES('SF Bay Area'," +
> "ARRAY['2345','46345','23234']," +
> "ARRAY[2345,46345,23234,456]," +
> "ARRAY[23.45,46.345,23.234,45.6,5.78]," +
> "ARRAY[12,34,56,78,910]," +
> "ARRAY['a','','c','ddd','e']," +
> "23.45," +
> "'wert'," +
> "NULL," +
> "ARRAY['a','','c','ddd','e','foo']" +
> ")";
> {code}
> Exception thrown:
> {code}
> Caused by: org.apache.calcite.sql.validate.SqlValidatorException: Cannot 
> assign to target field 'VARCHARS' of type CHAR(5) ARRAY from source field 
> 'EXPR$1' of type CHAR(5) ARRAY
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.calcite.runtime.Resources$ExInstWithCause.ex(Resources.java:405)
>   at org.apache.calcite.runtime.Resources$ExInst.ex(Resources.java:514)
>   ... 53 more
> {code}





[jira] [Resolved] (PHOENIX-3353) Paged Queries: use of LIMIT + OFFSET with ORDER BY

2017-05-18 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-3353.

Resolution: Duplicate

> Paged Queries: use of LIMIT + OFFSET with ORDER BY
> --
>
> Key: PHOENIX-3353
> URL: https://issues.apache.org/jira/browse/PHOENIX-3353
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0, 4.8.1
> Environment: Phoenix 4.8.1 Hbase 1.2
>Reporter: Jasper van Ams
>  Labels: limit, offset, orderby, paging
>
> As per documentation (https://phoenix.apache.org/paged.html)
> SELECT * FROM FOO LIMIT 10 OFFSET 10
> returns rows 11 to 20.
> However when adding ORDER BY:
> SELECT * FROM FOO ORDER BY BAR LIMIT 10 OFFSET 10 
> it returns nothing. Only raising the LIMIT with the appropriate OFFSET i.e.
> SELECT * FROM FOO ORDER BY BAR LIMIT 20 OFFSET 10 
> will now return rows 11 to 20
> while
> SELECT * FROM FOO LIMIT 20 OFFSET 10 
> returns rows 11 to 30
> In short: LIMIT + OFFSET combined with ORDER BY on a non-primary-key column 
> returns unexpected results.
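The expected paging semantics are: apply ORDER BY first, then skip OFFSET rows, then return at most LIMIT rows — i.e. the slice rows[offset : offset + limit]. The behavior reported above looks like rows[offset : limit] instead. A sketch of the correct semantics:

```python
def page(rows, limit, offset, order_key=None):
    """Correct paging semantics: sort (ORDER BY), skip OFFSET rows,
    then return at most LIMIT rows."""
    if order_key is not None:
        rows = sorted(rows, key=order_key)
    return rows[offset:offset + limit]

rows = list(range(1, 31))  # rows 1..30
# LIMIT 10 OFFSET 10 should return rows 11..20, with or without ORDER BY.
assert page(rows, limit=10, offset=10) == list(range(11, 21))
assert page(rows, limit=10, offset=10, order_key=lambda r: r) == list(range(11, 21))
```

The report's workaround (LIMIT 20 OFFSET 10 returning rows 11..20 under ORDER BY) is consistent with the limit being applied as an absolute end index rather than a count after the offset.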





[jira] [Commented] (PHOENIX-3800) NPE when doing UPSERT SELECT into salted tables

2017-05-04 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15997829#comment-15997829
 ] 

Ankit Singhal commented on PHOENIX-3800:


Committed to master and the 4.x branches.
Sorry [~sato_eiichi], I missed adding your name in the commit message for all 
branches except 4.x-HBase-0.98.

> NPE when doing UPSERT SELECT into salted tables
> ---
>
> Key: PHOENIX-3800
> URL: https://issues.apache.org/jira/browse/PHOENIX-3800
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Eiichi Sato
>Assignee: Eiichi Sato
> Attachments: PHOENIX-3800.patch
>
>
> We ran into an NPE when doing UPSERT SELECT into salted tables, with client 
> and server both running the 4.10.0 release. Here is a minimal reproducer and 
> the stack trace on the client side.
> {code}
> create table test (id varchar not null primary key, a integer, b integer) 
> salt_buckets = 2;
> upsert into test (id, b) select id, 1 from test;
> {code}
> {code}
> java.lang.NullPointerException: at index 2
> at 
> com.google.common.collect.ObjectArrays.checkElementNotNull(ObjectArrays.java:191)
> at 
> com.google.common.collect.ImmutableList.construct(ImmutableList.java:320)
> at 
> com.google.common.collect.ImmutableList.copyOf(ImmutableList.java:290)
> at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:534)
> at org.apache.phoenix.schema.PTableImpl.(PTableImpl.java:408)
> at 
> org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:297)
> at 
> org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:684)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:611)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:597)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:351)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:341)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:339)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1511)
> at sqlline.Commands.execute(Commands.java:822)
> at sqlline.Commands.sql(Commands.java:732)
> at sqlline.SqlLine.dispatch(SqlLine.java:813)
> at sqlline.SqlLine.begin(SqlLine.java:686)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:291)
> {code}
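For context: a salted Phoenix table prepends a salt byte to every row key, computed as a hash of the key modulo SALT_BUCKETS, which spreads write-heavy sequential keys across regions. A rough sketch of the scheme (simplified hash for illustration, not Phoenix's exact one):

```python
def salt_byte(row_key, buckets):
    # Stand-in hash; Phoenix hashes the row key bytes with its own function.
    return sum(row_key) % buckets

def salted_key(row_key, buckets=2):
    """Prepend the salt byte, as a table declared with SALT_BUCKETS=2 would."""
    return bytes([salt_byte(row_key, buckets)]) + row_key
```

The salt byte is an extra hidden leading key column, which is why compiling an UPSERT SELECT over a salted table has to account for one more projected column than the user-visible schema.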





[jira] [Assigned] (PHOENIX-3800) NPE when doing UPSERT SELECT into salted tables

2017-05-04 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal reassigned PHOENIX-3800:
--

Assignee: Eiichi Sato

> NPE when doing UPSERT SELECT into salted tables
> ---
>
> Key: PHOENIX-3800
> URL: https://issues.apache.org/jira/browse/PHOENIX-3800
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Eiichi Sato
>Assignee: Eiichi Sato
> Attachments: PHOENIX-3800.patch
>
>
> We ran into an NPE when doing UPSERT SELECT into salted tables, with client 
> and server both running the 4.10.0 release. Here is a minimal reproducer and 
> the stack trace on the client side.
> {code}
> create table test (id varchar not null primary key, a integer, b integer) 
> salt_buckets = 2;
> upsert into test (id, b) select id, 1 from test;
> {code}
> {code}
> java.lang.NullPointerException: at index 2
> at 
> com.google.common.collect.ObjectArrays.checkElementNotNull(ObjectArrays.java:191)
> at 
> com.google.common.collect.ImmutableList.construct(ImmutableList.java:320)
> at 
> com.google.common.collect.ImmutableList.copyOf(ImmutableList.java:290)
> at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:534)
> at org.apache.phoenix.schema.PTableImpl.(PTableImpl.java:408)
> at 
> org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:297)
> at 
> org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:684)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:611)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:597)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:351)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:341)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:339)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1511)
> at sqlline.Commands.execute(Commands.java:822)
> at sqlline.Commands.sql(Commands.java:732)
> at sqlline.SqlLine.dispatch(SqlLine.java:813)
> at sqlline.SqlLine.begin(SqlLine.java:686)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:291)
> {code}





[jira] [Commented] (PHOENIX-3800) NPE when doing UPSERT SELECT into salted tables

2017-05-04 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15997820#comment-15997820
 ] 

Ankit Singhal commented on PHOENIX-3800:


+1, Thanks [~sato_eiichi] for the fix. 
Let me commit this.

> NPE when doing UPSERT SELECT into salted tables
> ---
>
> Key: PHOENIX-3800
> URL: https://issues.apache.org/jira/browse/PHOENIX-3800
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Eiichi Sato
> Attachments: PHOENIX-3800.patch
>
>
> We ran into an NPE when doing UPSERT SELECT into salted tables, with client 
> and server both running the 4.10.0 release. Here is a minimal reproducer and 
> the stack trace on the client side.
> {code}
> create table test (id varchar not null primary key, a integer, b integer) 
> salt_buckets = 2;
> upsert into test (id, b) select id, 1 from test;
> {code}
> {code}
> java.lang.NullPointerException: at index 2
> at 
> com.google.common.collect.ObjectArrays.checkElementNotNull(ObjectArrays.java:191)
> at 
> com.google.common.collect.ImmutableList.construct(ImmutableList.java:320)
> at 
> com.google.common.collect.ImmutableList.copyOf(ImmutableList.java:290)
> at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:534)
> at org.apache.phoenix.schema.PTableImpl.(PTableImpl.java:408)
> at 
> org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:297)
> at 
> org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:684)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:611)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:597)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:351)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:341)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:339)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1511)
> at sqlline.Commands.execute(Commands.java:822)
> at sqlline.Commands.sql(Commands.java:732)
> at sqlline.SqlLine.dispatch(SqlLine.java:813)
> at sqlline.SqlLine.begin(SqlLine.java:686)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:291)
> {code}





[jira] [Resolved] (PHOENIX-3816) Implement SET_OPTION for consistency in phoenix-calcite

2017-05-02 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-3816.

Resolution: Fixed

> Implement SET_OPTION for consistency in phoenix-calcite
> ---
>
> Key: PHOENIX-3816
> URL: https://issues.apache.org/jira/browse/PHOENIX-3816
> Project: Phoenix
>  Issue Type: Bug
>        Reporter: Ankit Singhal
>    Assignee: Ankit Singhal
>  Labels: calcite
> Attachments: PHOENIX-3816.patch
>
>






[jira] [Resolved] (PHOENIX-3809) CURRENT_DATE() (with parentheses) is illegal in Calcite

2017-05-02 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-3809.

Resolution: Fixed

> CURRENT_DATE() (with parentheses) is illegal in Calcite
> ---
>
> Key: PHOENIX-3809
> URL: https://issues.apache.org/jira/browse/PHOENIX-3809
> Project: Phoenix
>  Issue Type: Bug
>        Reporter: Ankit Singhal
>    Assignee: Ankit Singhal
>  Labels: calcite
>
> Calcite doesn't allow system functions that don't accept arguments to be 
> specified with parentheses.
> For example:
> CURRENT_DATE() is illegal, whereas CURRENT_DATE is expected.
> At validation level:- SqlValidatorImpl#validateCall 
> {code}
> if ((call.operandCount() == 0)
> && (operator.getSyntax() == SqlSyntax.FUNCTION_ID)
> && !call.isExpanded()) {
>   // For example, "LOCALTIME()" is illegal. (It should be
>   // "LOCALTIME", which would have been handled as a
>   // SqlIdentifier.)
>   throw handleUnresolvedFunction(call, (SqlFunction) operator,
>   ImmutableList.of(), null);
> }
> {code}
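One way to tolerate both spellings is to normalize the empty-parentheses call back to the bare identifier before validation. A toy sketch (a regex rewrite for illustration, not how Calcite's parser actually handles niladic functions):

```python
import re

# Zero-argument ("niladic") system functions Calcite expects without parentheses.
NILADIC = ("CURRENT_DATE", "CURRENT_TIME", "CURRENT_TIMESTAMP", "LOCALTIME")

_NILADIC_CALL = re.compile(r"\b(" + "|".join(NILADIC) + r")\(\)", re.IGNORECASE)

def normalize_niladic(sql):
    """Rewrite CURRENT_DATE() -> CURRENT_DATE (etc.) so a validator that
    only accepts the bare identifier form still resolves the call."""
    return _NILADIC_CALL.sub(r"\1", sql)
```

After normalization, `SELECT CURRENT_DATE()` becomes `SELECT CURRENT_DATE`, which the validator handles as a SqlIdentifier rather than an unresolved function call.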





[jira] [Commented] (PHOENIX-3809) CURRENT_DATE() (with parentheses) is illegal in Calcite

2017-05-02 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15992461#comment-15992461
 ] 

Ankit Singhal commented on PHOENIX-3809:


Thanks [~julianhyde] and [~jamestaylor].


> CURRENT_DATE() (with parentheses) is illegal in Calcite
> ---
>
> Key: PHOENIX-3809
> URL: https://issues.apache.org/jira/browse/PHOENIX-3809
> Project: Phoenix
>  Issue Type: Bug
>    Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>  Labels: calcite
>
> Calcite doesn't allow system functions that don't accept arguments to be 
> specified with parentheses.
> For example:
> CURRENT_DATE() is illegal, whereas CURRENT_DATE is expected.
> At validation level:- SqlValidatorImpl#validateCall 
> {code}
> if ((call.operandCount() == 0)
> && (operator.getSyntax() == SqlSyntax.FUNCTION_ID)
> && !call.isExpanded()) {
>   // For example, "LOCALTIME()" is illegal. (It should be
>   // "LOCALTIME", which would have been handled as a
>   // SqlIdentifier.)
>   throw handleUnresolvedFunction(call, (SqlFunction) operator,
>   ImmutableList.of(), null);
> }
> {code}





[jira] [Updated] (PHOENIX-3816) Implement SET_OPTION for consistency in phoenix-calcite

2017-04-28 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-3816:
---
Summary: Implement SET_OPTION for consistency in phoenix-calcite  (was: 
Implement SET_OPTION for consistency)

> Implement SET_OPTION for consistency in phoenix-calcite
> ---
>
> Key: PHOENIX-3816
> URL: https://issues.apache.org/jira/browse/PHOENIX-3816
> Project: Phoenix
>  Issue Type: Bug
>        Reporter: Ankit Singhal
>    Assignee: Ankit Singhal
>  Labels: calcite
> Attachments: PHOENIX-3816.patch
>
>






[jira] [Updated] (PHOENIX-3816) Implement SET_OPTION for consistency

2017-04-28 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-3816:
---
Attachment: PHOENIX-3816.patch

-- Patch that adds the functionality to set ALTER SESSION SET 
consistency='timeline'.
-- EXPLAIN on the test table will fail until we fix PHOENIX-3104.

> Implement SET_OPTION for consistency
> 
>
> Key: PHOENIX-3816
> URL: https://issues.apache.org/jira/browse/PHOENIX-3816
> Project: Phoenix
>  Issue Type: Bug
>    Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>  Labels: calcite
> Attachments: PHOENIX-3816.patch
>
>






[jira] [Created] (PHOENIX-3816) Implement SET_OPTION for consistency

2017-04-28 Thread Ankit Singhal (JIRA)
Ankit Singhal created PHOENIX-3816:
--

 Summary: Implement SET_OPTION for consistency
 Key: PHOENIX-3816
 URL: https://issues.apache.org/jira/browse/PHOENIX-3816
 Project: Phoenix
  Issue Type: Bug
Reporter: Ankit Singhal
Assignee: Ankit Singhal








[jira] [Resolved] (PHOENIX-3682) CURRENT_DATE and CURRENT_TIME cannot be resolved in Phoenix-Calcite

2017-04-27 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-3682.

Resolution: Duplicate

Oops, missed this [~rajesh23]. As the discussion is logged on PHOENIX-3809, 
closing this one as a duplicate.

> CURRENT_DATE and CURRENT_TIME cannot be resolved in Phoenix-Calcite
> ---
>
> Key: PHOENIX-3682
> URL: https://issues.apache.org/jira/browse/PHOENIX-3682
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>  Labels: calcite
>
> {noformat}
> testCurrentTimeWithProjectedTable(org.apache.phoenix.end2end.DateTimeIT)  
> Time elapsed: 7.013 sec  <<< ERROR!
> java.sql.SQLException: Error while executing SQL "select /*+ 
> USE_SORT_MERGE_JOIN */ op.id, current_time() from T14 op where op.id in 
> (select id from T15)": From line 1, column 42 to line 1, column 55: No 
> match found for function signature CURRENT_TIME()
>   at 
> org.apache.phoenix.end2end.DateTimeIT.testCurrentTimeWithProjectedTable(DateTimeIT.java:786)
> Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, 
> column 42 to line 1, column 55: No match found for function signature 
> CURRENT_TIME()
>   at 
> org.apache.phoenix.end2end.DateTimeIT.testCurrentTimeWithProjectedTable(DateTimeIT.java:786)
> Caused by: org.apache.calcite.sql.validate.SqlValidatorException: No match 
> found for function signature CURRENT_TIME()
>   at 
> org.apache.phoenix.end2end.DateTimeIT.testCurrentTimeWithProjectedTable(DateTimeIT.java:786)
> {noformat}
> {noformat}
> testCurrentDateWithNoTable(org.apache.phoenix.end2end.DateTimeIT)  Time 
> elapsed: 2.367 sec  <<< ERROR!
> java.sql.SQLException: Error while executing SQL "SELECT CURRENT_DATE()": 
> From line 1, column 8 to line 1, column 21: No match found for function 
> signature CURRENT_DATE()
>   at 
> org.apache.phoenix.end2end.DateTimeIT.testCurrentDateWithNoTable(DateTimeIT.java:744)
> Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, 
> column 8 to line 1, column 21: No match found for function signature 
> CURRENT_DATE()
>   at 
> org.apache.phoenix.end2end.DateTimeIT.testCurrentDateWithNoTable(DateTimeIT.java:744)
> Caused by: org.apache.calcite.sql.validate.SqlValidatorException: No match 
> found for function signature CURRENT_DATE()
>   at 
> org.apache.phoenix.end2end.DateTimeIT.testCurrentDateWithNoTable(DateTimeIT.java:744)
> {noformat}





[jira] [Assigned] (PHOENIX-3682) CURRENT_DATE and CURRENT_TIME cannot be resolved in Phoenix-Calcite

2017-04-27 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal reassigned PHOENIX-3682:
--

Assignee: (was: Rajeshbabu Chintaguntla)

> CURRENT_DATE and CURRENT_TIME cannot be resolved in Phoenix-Calcite
> ---
>
> Key: PHOENIX-3682
> URL: https://issues.apache.org/jira/browse/PHOENIX-3682
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>  Labels: calcite
>
> {noformat}
> testCurrentTimeWithProjectedTable(org.apache.phoenix.end2end.DateTimeIT)  
> Time elapsed: 7.013 sec  <<< ERROR!
> java.sql.SQLException: Error while executing SQL "select /*+ 
> USE_SORT_MERGE_JOIN */ op.id, current_time() from T14 op where op.id in 
> (select id from T15)": From line 1, column 42 to line 1, column 55: No 
> match found for function signature CURRENT_TIME()
>   at 
> org.apache.phoenix.end2end.DateTimeIT.testCurrentTimeWithProjectedTable(DateTimeIT.java:786)
> Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, 
> column 42 to line 1, column 55: No match found for function signature 
> CURRENT_TIME()
>   at 
> org.apache.phoenix.end2end.DateTimeIT.testCurrentTimeWithProjectedTable(DateTimeIT.java:786)
> Caused by: org.apache.calcite.sql.validate.SqlValidatorException: No match 
> found for function signature CURRENT_TIME()
>   at 
> org.apache.phoenix.end2end.DateTimeIT.testCurrentTimeWithProjectedTable(DateTimeIT.java:786)
> {noformat}
> {noformat}
> testCurrentDateWithNoTable(org.apache.phoenix.end2end.DateTimeIT)  Time 
> elapsed: 2.367 sec  <<< ERROR!
> java.sql.SQLException: Error while executing SQL "SELECT CURRENT_DATE()": 
> From line 1, column 8 to line 1, column 21: No match found for function 
> signature CURRENT_DATE()
>   at 
> org.apache.phoenix.end2end.DateTimeIT.testCurrentDateWithNoTable(DateTimeIT.java:744)
> Caused by: org.apache.calcite.runtime.CalciteContextException: From line 1, 
> column 8 to line 1, column 21: No match found for function signature 
> CURRENT_DATE()
>   at 
> org.apache.phoenix.end2end.DateTimeIT.testCurrentDateWithNoTable(DateTimeIT.java:744)
> Caused by: org.apache.calcite.sql.validate.SqlValidatorException: No match 
> found for function signature CURRENT_DATE()
>   at 
> org.apache.phoenix.end2end.DateTimeIT.testCurrentDateWithNoTable(DateTimeIT.java:744)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3809) CURRENT_DATE() (with parentheses) is illegal in Calcite

2017-04-27 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15988283#comment-15988283
 ] 

Ankit Singhal commented on PHOENIX-3809:


bq. I think having a version that allows parentheses would ease the transition 
from 4.x to 5.0, so supporting it would be useful.
Raised a pull request for the same: 
https://github.com/apache/calcite/pull/438

Though it's not standard SQL, CURRENT_DATE (with and without parentheses) is 
also allowed in MySQL, so IMO continuing to support it on our side would not be 
that bad.


> CURRENT_DATE() (with parentheses) is illegal in Calcite
> ---
>
> Key: PHOENIX-3809
> URL: https://issues.apache.org/jira/browse/PHOENIX-3809
> Project: Phoenix
>  Issue Type: Bug
>    Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>  Labels: calcite
>
> Calcite doesn't allow system functions that don't accept arguments to be 
> specified with parentheses.
> For example:
> CURRENT_DATE() is illegal, whereas CURRENT_DATE is expected.
> At validation level:- SqlValidatorImpl#validateCall 
> {code}
> if ((call.operandCount() == 0)
> && (operator.getSyntax() == SqlSyntax.FUNCTION_ID)
> && !call.isExpanded()) {
>   // For example, "LOCALTIME()" is illegal. (It should be
>   // "LOCALTIME", which would have been handled as a
>   // SqlIdentifier.)
>   throw handleUnresolvedFunction(call, (SqlFunction) operator,
>   ImmutableList.of(), null);
> }
> {code}
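Read plainly, the quoted check rejects any zero-operand, unexpanded call to an operator declared with FUNCTION_ID syntax. A toy restatement of that condition (hypothetical names standing in for Calcite's types, not Calcite code):

```java
public class NiladicCallCheck {
    // Hypothetical enum standing in for Calcite's SqlSyntax; only the two
    // values relevant to the quoted check are modeled.
    enum Syntax { FUNCTION, FUNCTION_ID }

    // Restates the condition quoted from SqlValidatorImpl#validateCall: a
    // zero-operand call to an operator declared with FUNCTION_ID syntax
    // (a niladic system function) that was not expanded is illegal.
    static boolean isIllegalNiladicCall(int operandCount, Syntax syntax, boolean expanded) {
        return operandCount == 0 && syntax == Syntax.FUNCTION_ID && !expanded;
    }

    public static void main(String[] args) {
        // CURRENT_DATE() parses as a zero-operand FUNCTION_ID call -> rejected.
        System.out.println(isIllegalNiladicCall(0, Syntax.FUNCTION_ID, false)); // true
        // A zero-argument function declared with ordinary FUNCTION syntax -> allowed.
        System.out.println(isIllegalNiladicCall(0, Syntax.FUNCTION, false));    // false
    }
}
```

This is only a restatement of the branch condition; the actual validator then raises the "No match found for function signature" error seen in the stack traces above.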



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (PHOENIX-3814) Unable to connect to Phoenix via Spark

2017-04-27 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-3814.

Resolution: Duplicate

> Unable to connect to Phoenix via Spark
> --
>
> Key: PHOENIX-3814
> URL: https://issues.apache.org/jira/browse/PHOENIX-3814
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
> Environment: Ubuntu 16.04.1, Apache Spark 2.1.0, Hbase 1.2.5, Phoenix 
> 4.10.0
>Reporter: Wajid Khattak
>
> Please see 
> http://stackoverflow.com/questions/43640864/apache-phoenix-for-spark-not-working



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3814) Unable to connect to Phoenix via Spark

2017-04-27 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15987205#comment-15987205
 ] 

Ankit Singhal commented on PHOENIX-3814:


Duplicate of PHOENIX-3721.

you may try including hbase-client jar in your spark classpath.

> Unable to connect to Phoenix via Spark
> --
>
> Key: PHOENIX-3814
> URL: https://issues.apache.org/jira/browse/PHOENIX-3814
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
> Environment: Ubuntu 16.04.1, Apache Spark 2.1.0, Hbase 1.2.5, Phoenix 
> 4.10.0
>Reporter: Wajid Khattak
>
> Please see 
> http://stackoverflow.com/questions/43640864/apache-phoenix-for-spark-not-working



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3710) Cannot use lowername data table name with indextool

2017-04-24 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15982460#comment-15982460
 ] 

Ankit Singhal commented on PHOENIX-3710:


Thanks [~sergey.soldatov], the changes look good. But to avoid regression, can 
we include the below two test cases as ITs 
(IndexExtendedIT.testSecondaryIndex() can be reused for this):
* IndexTool with both the table name and index name in lowercase 
* A check that the normalisation is correct for a Phoenix tableName of type 
\"S:T\"





> Cannot use lowername data table name with indextool
> ---
>
> Key: PHOENIX-3710
> URL: https://issues.apache.org/jira/browse/PHOENIX-3710
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Matthew Shipton
>Assignee: Sergey Soldatov
>Priority: Minor
> Attachments: PHOENIX-3710.patch, test.sh, test.sql
>
>
> {code}
> hbase org.apache.phoenix.mapreduce.index.IndexTool --data-table 
> \"my_lowcase_table\" --index-table INDEX_TABLE --output-path /tmp/some_path
> {code}
> results in:
> {code}
> java.lang.IllegalArgumentException:  INDEX_TABLE is not an index table for 
> MY_LOWCASE_TABLE
> {code}
> This is despite the data table being explicitly lowercased.
> It appears to be referring to the lowercase table, not the uppercase version.
> A workaround exists by changing the table name, but this is not always feasible.
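For context on why the quoting matters: Phoenix folds unquoted identifiers to upper case and preserves the case of double-quoted ones, so the tool has to compare the normalised form on both sides. A minimal sketch of that rule (a simplified, hypothetical stand-in, not Phoenix's actual implementation):

```java
public class IdentifierNormalization {
    // Simplified stand-in for Phoenix-style identifier normalisation
    // (assumption: mirrors the documented behavior, not the real code):
    // unquoted names fold to upper case, double-quoted names keep their case.
    static String normalize(String name) {
        if (name == null) {
            return null;
        }
        if (name.length() > 1 && name.startsWith("\"") && name.endsWith("\"")) {
            return name.substring(1, name.length() - 1); // quoted: strip quotes, keep case
        }
        return name.toUpperCase(); // unquoted: fold to upper case
    }

    public static void main(String[] args) {
        System.out.println(normalize("my_lowcase_table"));     // MY_LOWCASE_TABLE
        System.out.println(normalize("\"my_lowcase_table\"")); // my_lowcase_table
    }
}
```

Under this rule, passing the name without quotes makes IndexTool look for MY_LOWCASE_TABLE, which explains the mismatch reported above.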



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3809) CURRENT_DATE() (with parentheses) is illegal in Calcite

2017-04-24 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15981065#comment-15981065
 ] 

Ankit Singhal commented on PHOENIX-3809:


[~jamestaylor]/[~julianhyde], what can we do here? 


> CURRENT_DATE() (with parentheses) is illegal in Calcite
> ---
>
> Key: PHOENIX-3809
> URL: https://issues.apache.org/jira/browse/PHOENIX-3809
> Project: Phoenix
>  Issue Type: Bug
>    Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>  Labels: calcite
>
> Calcite doesn't allow system functions that don't accept arguments to be 
> specified with parentheses.
> For example:
> CURRENT_DATE() is illegal, whereas CURRENT_DATE is expected.
> At validation level:- SqlValidatorImpl#validateCall 
> {code}
> if ((call.operandCount() == 0)
> && (operator.getSyntax() == SqlSyntax.FUNCTION_ID)
> && !call.isExpanded()) {
>   // For example, "LOCALTIME()" is illegal. (It should be
>   // "LOCALTIME", which would have been handled as a
>   // SqlIdentifier.)
>   throw handleUnresolvedFunction(call, (SqlFunction) operator,
>   ImmutableList.of(), null);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (PHOENIX-3809) CURRENT_DATE() (with parentheses) is illegal in Calcite

2017-04-24 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-3809:
---
Description: 
Calcite doesn't allow system functions that don't accept arguments to be 
specified with parentheses.

For example:
CURRENT_DATE() is illegal, whereas CURRENT_DATE is expected.

At validation level:- SqlValidatorImpl#validateCall 
{code}
if ((call.operandCount() == 0)
&& (operator.getSyntax() == SqlSyntax.FUNCTION_ID)
&& !call.isExpanded()) {
  // For example, "LOCALTIME()" is illegal. (It should be
  // "LOCALTIME", which would have been handled as a
  // SqlIdentifier.)
  throw handleUnresolvedFunction(call, (SqlFunction) operator,
  ImmutableList.of(), null);
}
{code}

  was:
Calcite doesn't allow system functions that don't accept arguments to be 
specified with parentheses.

For example:
CURRENT_DATE() is illegal, whereas CURRENT_DATE is expected.

SqlValidatorImpl#validateCall
{code}
if ((call.operandCount() == 0)
&& (operator.getSyntax() == SqlSyntax.FUNCTION_ID)
&& !call.isExpanded()) {
  // For example, "LOCALTIME()" is illegal. (It should be
  // "LOCALTIME", which would have been handled as a
  // SqlIdentifier.)
  throw handleUnresolvedFunction(call, (SqlFunction) operator,
  ImmutableList.of(), null);
}
{code}


> CURRENT_DATE() (with parentheses) is illegal in Calcite
> ---
>
> Key: PHOENIX-3809
> URL: https://issues.apache.org/jira/browse/PHOENIX-3809
>     Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>  Labels: calcite
>
> Calcite doesn't allow system functions that don't accept arguments to be 
> specified with parentheses.
> For example:
> CURRENT_DATE() is illegal, whereas CURRENT_DATE is expected.
> At validation level:- SqlValidatorImpl#validateCall 
> {code}
> if ((call.operandCount() == 0)
> && (operator.getSyntax() == SqlSyntax.FUNCTION_ID)
> && !call.isExpanded()) {
>   // For example, "LOCALTIME()" is illegal. (It should be
>   // "LOCALTIME", which would have been handled as a
>   // SqlIdentifier.)
>   throw handleUnresolvedFunction(call, (SqlFunction) operator,
>   ImmutableList.of(), null);
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (PHOENIX-3809) CURRENT_DATE() (with parentheses) is illegal in Calcite

2017-04-24 Thread Ankit Singhal (JIRA)
Ankit Singhal created PHOENIX-3809:
--

 Summary: CURRENT_DATE() (with parentheses) is illegal in Calcite
 Key: PHOENIX-3809
 URL: https://issues.apache.org/jira/browse/PHOENIX-3809
 Project: Phoenix
  Issue Type: Bug
Reporter: Ankit Singhal
Assignee: Ankit Singhal


Calcite doesn't allow system functions that don't accept arguments to be 
specified with parentheses.

For example:
CURRENT_DATE() is illegal, whereas CURRENT_DATE is expected.

SqlValidatorImpl#validateCall
{code}
if ((call.operandCount() == 0)
&& (operator.getSyntax() == SqlSyntax.FUNCTION_ID)
&& !call.isExpanded()) {
  // For example, "LOCALTIME()" is illegal. (It should be
  // "LOCALTIME", which would have been handled as a
  // SqlIdentifier.)
  throw handleUnresolvedFunction(call, (SqlFunction) operator,
  ImmutableList.of(), null);
}
{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (PHOENIX-3751) spark 2.1 with Phoenix 4.10 load data as dataframe fail, NullPointerException

2017-04-20 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-3751.

Resolution: Fixed

Thanks [~jmahonin], committed to master and 4.x branches.

> spark 2.1 with Phoenix 4.10 load data as dataframe fail, NullPointerException
> -
>
> Key: PHOENIX-3751
> URL: https://issues.apache.org/jira/browse/PHOENIX-3751
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
> Environment: HBase 1.14
> spark 2.10
> phoenix: 4.10
>Reporter: Nan Xu
>Assignee: Ankit Singhal
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3751.patch
>
>
> create phoenix table: 
> create table phoenix.quote (  
> sym Varchar not null, 
> src varchar,
> kdbPublishTime time not null,
> location varchar,
> bid DOUBLE,
> ask DOUBLE,
> bsize unsigned_int,
> asize unsigned_int, 
> srcTime time,
> layer varchar,
> expiryTime time,
> quoteId varchar,
> recvTime time, 
> distTime  time,
> "TIME" time,
> CONSTRAINT quote_pk PRIMARY KEY (sym, src, kdbPublishTime)) 
> COMPRESSION='SNAPPY', DATA_BLOCK_ENCODING='FAST_DIFF', VERSIONS=1000
> insert data:
> SYM   SRC KDBPUBLISHTIME  LOCATIONBID ASK BSIZE   ASIZE   
> SRCTIMELAYEREXPIRYTIME  QUOTEID   RECVTIME 
> DISTTIME TIME
> 6AH7  cme103:42:59N  0.7471   0.7506  20  25  
> 03:42:59   (null)   (null)  (null) 03:42:59 
> (null)  03:42:59
> 6AH7  cme103:42:59N  0.7474   0.7506  25  25  
> 03:42:59   (null)   (null)  (null) 03:42:59 
> (null)  03:42:59
> val spark = SparkSession
> .builder()
> .appName("load_avro")
> .master("local[1]")
> .config("spark.sql.warehouse.dir", "file:/tmp/spark-warehouse")
> .getOrCreate()
>  val df = spark.sqlContext.phoenixTableAsDataFrame("PHOENIX.QUOTE", 
> Seq("SYM","SRC", "EXPIRYTIME"), zkUrl = Some("a1.cluster:2181"))
>   df.show(100)
> The problem is in PhoenixRDD:140:
>  val rowSeq = columns.map { case (name, sqlType) =>
> val res = pr.resultMap(name)
>   // Special handling for data types
>   if (dateAsTimestamp && (sqlType == 91 || sqlType == 19)) { // 91 is 
> the defined type for Date and 19 for UNSIGNED_DATE
> new java.sql.Timestamp(res.asInstanceOf[java.sql.Date].getTime)
>   } else if (sqlType == 92 || sqlType == 18) { // 92 is the defined 
> type for Time and 18 for UNSIGNED_TIME
> new java.sql.Timestamp(res.asInstanceOf[java.sql.Time].getTime)
>   } else {
> res
>   }
>   }
> res can be null here, so res.asInstanceOf[java.sql.Time].getTime throws an NPE.
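The failure mode above can be sketched in plain Java, assuming the fix is a null guard around the conversion (the attached patch may differ):

```java
import java.sql.Time;
import java.sql.Timestamp;

public class TimeConversion {
    // Illustrative guard for the NPE described above (a sketch, not the
    // actual PhoenixRDD patch): check for null before calling getTime()
    // instead of dereferencing the result unconditionally.
    static Timestamp toTimestamp(Time time) {
        return (time == null) ? null : new Timestamp(time.getTime());
    }

    public static void main(String[] args) {
        System.out.println(toTimestamp(null) == null);           // true: null passes through
        System.out.println(toTimestamp(new Time(0L)).getTime()); // 0
    }
}
```

With the unguarded version, the null EXPIRYTIME values in the sample rows would hit the NPE as soon as the DataFrame is materialized.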



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (PHOENIX-3792) Provide way to skip normalization of column names in phoenix-spark integration

2017-04-20 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-3792.

Resolution: Fixed

Thanks [~jmahonin], committed to master and 4.x branches.

> Provide way to skip normalization of column names in phoenix-spark integration
> --
>
> Key: PHOENIX-3792
> URL: https://issues.apache.org/jira/browse/PHOENIX-3792
> Project: Phoenix
>  Issue Type: Bug
>        Reporter: Ankit Singhal
>    Assignee: Ankit Singhal
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3792.patch, PHOENIX-3792_v1.patch
>
>
> If the user is reading an AVRO file and writing to a Phoenix table with case 
> sensitive column names, then we should provide the user with an option to 
> skip the normalisation as it seems there is no way to escape double quotes 
> for the column names in Avro schema.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (PHOENIX-3759) Dropping a local index causes NPE

2017-04-20 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-3759.

Resolution: Fixed

Thanks [~lhofhansl], committed to master and 4.x branches.

> Dropping a local index causes NPE
> -
>
> Key: PHOENIX-3759
> URL: https://issues.apache.org/jira/browse/PHOENIX-3759
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Mujtaba Chohan
>    Assignee: Ankit Singhal
> Attachments: PHOENIX-3759.patch
>
>
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.KeyValue.createKeyValueFromKey(KeyValue.java:2773)
>   at 
> org.apache.phoenix.util.RepairUtil.isLocalIndexStoreFilesConsistent(RepairUtil.java:32)
>   at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.preCompactScannerOpen(IndexHalfStoreFileReaderGenerator.java:197)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$6.call(RegionCoprocessorHost.java:521)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1621)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1697)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1660)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preCompactScannerOpen(RegionCoprocessorHost.java:516)
>   at 
> org.apache.hadoop.hbase.regionserver.compactions.Compactor.preCreateCoprocScanner(Compactor.java:325)
>   at 
> org.apache.hadoop.hbase.regionserver.compactions.Compactor.preCreateCoprocScanner(Compactor.java:315)
>   at 
> org.apache.hadoop.hbase.regionserver.compactions.Compactor.compact(Compactor.java:266)
>   at 
> org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:64)
>   at 
> org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:121)
>   at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1154)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1559)
>   at 
> org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.doCompaction(CompactSplitThread.java:502)
>   at 
> org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:540)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at java.util.concurrent.ThreadPoolExecutor$Work
> {noformat}
> Region server gets aborted due to this NPE. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (PHOENIX-3759) Dropping a local index causes NPE

2017-04-20 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-3759:
---
Fix Version/s: 4.11.0

> Dropping a local index causes NPE
> -
>
> Key: PHOENIX-3759
> URL: https://issues.apache.org/jira/browse/PHOENIX-3759
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Mujtaba Chohan
>    Assignee: Ankit Singhal
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3759.patch
>
>
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.KeyValue.createKeyValueFromKey(KeyValue.java:2773)
>   at 
> org.apache.phoenix.util.RepairUtil.isLocalIndexStoreFilesConsistent(RepairUtil.java:32)
>   at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.preCompactScannerOpen(IndexHalfStoreFileReaderGenerator.java:197)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$6.call(RegionCoprocessorHost.java:521)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1621)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1697)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1660)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preCompactScannerOpen(RegionCoprocessorHost.java:516)
>   at 
> org.apache.hadoop.hbase.regionserver.compactions.Compactor.preCreateCoprocScanner(Compactor.java:325)
>   at 
> org.apache.hadoop.hbase.regionserver.compactions.Compactor.preCreateCoprocScanner(Compactor.java:315)
>   at 
> org.apache.hadoop.hbase.regionserver.compactions.Compactor.compact(Compactor.java:266)
>   at 
> org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:64)
>   at 
> org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:121)
>   at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1154)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1559)
>   at 
> org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.doCompaction(CompactSplitThread.java:502)
>   at 
> org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:540)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at java.util.concurrent.ThreadPoolExecutor$Work
> {noformat}
> Region server gets aborted due to this NPE. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (PHOENIX-3792) Provide way to skip normalization of column names in phoenix-spark integration

2017-04-20 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15976709#comment-15976709
 ] 

Ankit Singhal edited comment on PHOENIX-3792 at 4/20/17 1:48 PM:
-

Thanks [~jmahonin] for the review. Please find the updated patch.
bq. We should replace 2.11 above with ${scala.binary.version}
I think it's ok to test "skipNormalizingIdentifier" option without Avro for now.

bq. Also, I +1'd PHOENIX-3751 earlier, though perhaps you're waiting for user 
feedback.
it got skipped from my radar, will commit along with this. 


was (Author: an...@apache.org):
Thanks [~jmahonin] for the review. Please find the updated patch.
bq. We should replace 2.11 above with ${scala.binary.version}
I think it's to test "skipNormalizingIdentifier" option without Avro for now.

bq. Also, I +1'd PHOENIX-3751 earlier, though perhaps you're waiting for user 
feedback.
it got skipped from my radar, will commit along with this.

> Provide way to skip normalization of column names in phoenix-spark integration
> --
>
> Key: PHOENIX-3792
> URL: https://issues.apache.org/jira/browse/PHOENIX-3792
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3792.patch, PHOENIX-3792_v1.patch
>
>
> If the user is reading an AVRO file and writing to a Phoenix table with case 
> sensitive column names, then we should provide the user with an option to 
> skip the normalisation as it seems there is no way to escape double quotes 
> for the column names in Avro schema.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (PHOENIX-3792) Provide way to skip normalization of column names in phoenix-spark integration

2017-04-20 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-3792:
---
Attachment: PHOENIX-3792_v1.patch

> Provide way to skip normalization of column names in phoenix-spark integration
> --
>
> Key: PHOENIX-3792
> URL: https://issues.apache.org/jira/browse/PHOENIX-3792
> Project: Phoenix
>  Issue Type: Bug
>        Reporter: Ankit Singhal
>    Assignee: Ankit Singhal
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3792.patch, PHOENIX-3792_v1.patch
>
>
> If the user is reading an AVRO file and writing to a Phoenix table with case 
> sensitive column names, then we should provide the user with an option to 
> skip the normalisation as it seems there is no way to escape double quotes 
> for the column names in Avro schema.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3792) Provide way to skip normalization of column names in phoenix-spark integration

2017-04-20 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15976709#comment-15976709
 ] 

Ankit Singhal commented on PHOENIX-3792:


Thanks [~jmahonin] for the review. Please find the updated patch.
bq. We should replace 2.11 above with ${scala.binary.version}
I think it's ok to test the "skipNormalizingIdentifier" option without Avro for now.

bq. Also, I +1'd PHOENIX-3751 earlier, though perhaps you're waiting for user 
feedback.
it got skipped from my radar, will commit along with this.

> Provide way to skip normalization of column names in phoenix-spark integration
> --
>
> Key: PHOENIX-3792
> URL: https://issues.apache.org/jira/browse/PHOENIX-3792
> Project: Phoenix
>  Issue Type: Bug
>    Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3792.patch
>
>
> If the user is reading an AVRO file and writing to a Phoenix table with case 
> sensitive column names, then we should provide the user with an option to 
> skip the normalisation as it seems there is no way to escape double quotes 
> for the column names in Avro schema.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (PHOENIX-3360) Secondary index configuration is wrong

2017-04-16 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15970391#comment-15970391
 ] 

Ankit Singhal edited comment on PHOENIX-3360 at 4/16/17 1:40 PM:
-

bq. Yes James Taylor we have observed index updates and meta updates are going 
with higher priority in case of RS->RS.
[~rajeshbabu], In the case of UPSERT SELECT running on server, we also need 
high priority RPCs for cross regionserver UPSERTs. Is that also covered?


was (Author: an...@apache.org):
bq. Yes James Taylor we have observed index updates and meta updates are going 
with higher priority in case of RS->RS.
[~rajeshbabu], In the case of UPSERT SELECT, we also need high priority RPCs 
for cross regionserver UPSERTs. Is that also covered?

> Secondary index configuration is wrong
> --
>
> Key: PHOENIX-3360
> URL: https://issues.apache.org/jira/browse/PHOENIX-3360
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: William Yang
>Priority: Critical
> Fix For: 4.10.0
>
> Attachments: ConfCP.java, indexlogging.patch, PHOENIX-3360.patch, 
> PHOENIX-3360-v2.PATCH, PHOENIX-3360-v3.PATCH, PHOENIX-3360-v4.PATCH
>
>
> IndexRpcScheduler allocates some handler threads and uses a higher priority 
> for RPCs. The corresponding IndexRpcController is not used by default as it 
> is, but used through ServerRpcControllerFactory that we configure from Ambari 
> by default which sets the priority of the outgoing RPCs to either metadata 
> priority, or the index priority.
> However, after reading the code of IndexRpcController / ServerRpcController, it 
> seems that the IndexRPCController DOES NOT look at whether the outgoing RPC 
> is for an Index table or not. It just sets ALL rpc priorities to be the index 
> priority. The intention seems to be the case that ONLY on servers, we 
> configure ServerRpcControllerFactory, and with clients we NEVER configure 
> ServerRpcControllerFactory, but instead use ClientRpcControllerFactory. We 
> configure ServerRpcControllerFactory from Ambari, which in effect makes it so 
> that ALL rpcs from Phoenix are only handled by the index handlers by default. 
> It means all deadlock cases are still there. 
> The documentation in https://phoenix.apache.org/secondary_indexing.html is 
> also wrong in this sense. It does not talk about server side / client side. 
> Plus this way of configuring different values is not how HBase configuration 
> is deployed. We cannot have the configuration show the 
> ServerRpcControllerFactory even only for server nodes, because the clients 
> running on those nodes will also see the wrong values. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3360) Secondary index configuration is wrong

2017-04-16 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15970391#comment-15970391
 ] 

Ankit Singhal commented on PHOENIX-3360:


bq. Yes James Taylor we have observed index updates and meta updates are going 
with higher priority in case of RS->RS.
[~rajeshbabu], In the case of UPSERT SELECT, we also need high priority RPCs 
for cross regionserver UPSERTs. Is that also covered?

> Secondary index configuration is wrong
> --
>
> Key: PHOENIX-3360
> URL: https://issues.apache.org/jira/browse/PHOENIX-3360
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: William Yang
>Priority: Critical
> Fix For: 4.10.0
>
> Attachments: ConfCP.java, indexlogging.patch, PHOENIX-3360.patch, 
> PHOENIX-3360-v2.PATCH, PHOENIX-3360-v3.PATCH, PHOENIX-3360-v4.PATCH
>
>
> IndexRpcScheduler allocates some handler threads and uses a higher priority 
> for RPCs. The corresponding IndexRpcController is not used by default as it 
> is, but used through ServerRpcControllerFactory that we configure from Ambari 
> by default which sets the priority of the outgoing RPCs to either metadata 
> priority, or the index priority.
> However, after reading the code of IndexRpcController / ServerRpcController, it 
> seems that the IndexRPCController DOES NOT look at whether the outgoing RPC 
> is for an Index table or not. It just sets ALL rpc priorities to be the index 
> priority. The intention seems to be the case that ONLY on servers, we 
> configure ServerRpcControllerFactory, and with clients we NEVER configure 
> ServerRpcControllerFactory, but instead use ClientRpcControllerFactory. We 
> configure ServerRpcControllerFactory from Ambari, which in effect makes it so 
> that ALL rpcs from Phoenix are only handled by the index handlers by default. 
> It means all deadlock cases are still there. 
> The documentation in https://phoenix.apache.org/secondary_indexing.html is 
> also wrong in this sense. It does not talk about server side / client side. 
> Plus this way of configuring different values is not how HBase configuration 
> is deployed. We cannot have the configuration show the 
> ServerRpcControllerFactory even only for server nodes, because the clients 
> running on those nodes will also see the wrong values. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (PHOENIX-3721) CSV bulk load doesn't work well with SYSTEM.MUTEX

2017-04-16 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15970299#comment-15970299
 ] 

Ankit Singhal edited comment on PHOENIX-3721 at 4/16/17 12:06 PM:
--

[~sergey.soldatov], it looks similar to 
https://issues.apache.org/jira/browse/HBASE-17170.

Workaround till we move to HBase 1.4:
Also include the HBase jars in HADOOP_CLASSPATH if you are running CSVBulkLoad 
using the "hadoop jar" command.

[~arobe...@fuze.com], see if the workaround fixes your problem for now. 


was (Author: an...@apache.org):
[~sergey.soldatov], it looks similar to 
https://issues.apache.org/jira/browse/HBASE-17170.

Workaround:-
Include HBase jars also in HADOOP_CLASSPATH if you are running CSVBulkload 
using "hadoop jar"

> CSV bulk load doesn't work well with SYSTEM.MUTEX
> -
>
> Key: PHOENIX-3721
> URL: https://issues.apache.org/jira/browse/PHOENIX-3721
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Sergey Soldatov
>Priority: Blocker
>
> This is quite strange. I'm using HBase 1.2.4 and current master branch.
> During the running CSV bulk load in the regular way I got the following 
> exception: 
> {noformat}
> xception in thread "main" java.sql.SQLException: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hbase.TableExistsException):
>  SYSTEM.MUTEX
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2465)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2382)
>   at 
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2382)
>   at 
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
>   at 
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:149)
>   at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
>   at java.sql.DriverManager.getConnection(DriverManager.java:664)
>   at java.sql.DriverManager.getConnection(DriverManager.java:208)
>   at org.apache.phoenix.util.QueryUtil.getConnection(QueryUtil.java:337)
>   at org.apache.phoenix.util.QueryUtil.getConnection(QueryUtil.java:329)
>   at 
> org.apache.phoenix.mapreduce.AbstractBulkLoadTool.loadData(AbstractBulkLoadTool.java:209)
>   at 
> org.apache.phoenix.mapreduce.AbstractBulkLoadTool.run(AbstractBulkLoadTool.java:183)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>   at 
> org.apache.phoenix.mapreduce.CsvBulkLoadTool.main(CsvBulkLoadTool.java:109)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hbase.TableExistsException):
>  SYSTEM.MUTEX
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:285)
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:106)
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:58)
>   at 
> org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
>   at 
> org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:498)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1061)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:856)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:809)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:495)
> {noformat}
> Checked the code and it seems 

[jira] [Updated] (PHOENIX-3792) Provide way to skip normalization of column names in phoenix-spark integration

2017-04-16 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-3792:
---
Attachment: PHOENIX-3792.patch

WDYT, [~jmahonin]

> Provide way to skip normalization of column names in phoenix-spark integration
> --
>
> Key: PHOENIX-3792
> URL: https://issues.apache.org/jira/browse/PHOENIX-3792
> Project: Phoenix
>  Issue Type: Bug
>        Reporter: Ankit Singhal
>    Assignee: Ankit Singhal
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3792.patch
>
>
> If the user is reading an AVRO file and writing to a Phoenix table with case 
> sensitive column names, then we should provide the user with an option to 
> skip the normalization, as there seems to be no way to escape double quotes 
> for the column names in an Avro schema.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (PHOENIX-3792) Provide way to skip normalization of column names in phoenix-spark integration

2017-04-16 Thread Ankit Singhal (JIRA)
Ankit Singhal created PHOENIX-3792:
--

 Summary: Provide way to skip normalization of column names in 
phoenix-spark integration
 Key: PHOENIX-3792
 URL: https://issues.apache.org/jira/browse/PHOENIX-3792
 Project: Phoenix
  Issue Type: Bug
Reporter: Ankit Singhal
Assignee: Ankit Singhal
 Fix For: 4.11.0


If the user is reading an AVRO file and writing to a Phoenix table with case 
sensitive column names, then we should provide the user with an option to skip 
the normalization, as there seems to be no way to escape double quotes for the 
column names in an Avro schema.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3721) CSV bulk load doesn't work well with SYSTEM.MUTEX

2017-04-16 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15970299#comment-15970299
 ] 

Ankit Singhal commented on PHOENIX-3721:


[~sergey.soldatov], it looks similar to 
https://issues.apache.org/jira/browse/HBASE-17170.

Workaround:-
Also include the HBase jars in HADOOP_CLASSPATH if you are running the CSV bulk load 
using "hadoop jar"

> CSV bulk load doesn't work well with SYSTEM.MUTEX
> -
>
> Key: PHOENIX-3721
> URL: https://issues.apache.org/jira/browse/PHOENIX-3721
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Sergey Soldatov
>Priority: Blocker
>
> This is quite strange. I'm using HBase 1.2.4 and current master branch.
> While running the CSV bulk load in the regular way I got the following 
> exception: 
> {noformat}
> Exception in thread "main" java.sql.SQLException: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hbase.TableExistsException):
>  SYSTEM.MUTEX
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2465)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2382)
>   at 
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2382)
>   at 
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
>   at 
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:149)
>   at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
>   at java.sql.DriverManager.getConnection(DriverManager.java:664)
>   at java.sql.DriverManager.getConnection(DriverManager.java:208)
>   at org.apache.phoenix.util.QueryUtil.getConnection(QueryUtil.java:337)
>   at org.apache.phoenix.util.QueryUtil.getConnection(QueryUtil.java:329)
>   at 
> org.apache.phoenix.mapreduce.AbstractBulkLoadTool.loadData(AbstractBulkLoadTool.java:209)
>   at 
> org.apache.phoenix.mapreduce.AbstractBulkLoadTool.run(AbstractBulkLoadTool.java:183)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>   at 
> org.apache.phoenix.mapreduce.CsvBulkLoadTool.main(CsvBulkLoadTool.java:109)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hbase.TableExistsException):
>  SYSTEM.MUTEX
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:285)
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:106)
>   at 
> org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:58)
>   at 
> org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
>   at 
> org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:498)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1061)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:856)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:809)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
>   at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:495)
> {noformat}
> Checked the code and it seems that the problem is in the createSysMutexTable 
> function. It expects TableExistsException (and skips it), but in my case the 
> exception is wrapped by RemoteException, so it's not skipped and the init 
> fails. The easy fix is to handle RemoteException and check whether it wraps 
> TableExistsException, but it looks a bit ugly.  
> [~jamestaylor] [~samarthjain] any thoughts? 
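The unwrapping described above can be sketched as follows. This is a hypothetical, self-contained illustration: the nested `RemoteException` class is a stand-in for `org.apache.hadoop.ipc.RemoteException` (which carries the server-side exception class by name rather than as a Java cause), not the actual Phoenix fix.

```java
import java.io.IOException;

public class MutexExceptionUnwrap {
    // Stand-in for org.apache.hadoop.ipc.RemoteException: the server-side
    // exception travels as a class name string, not as a chained cause.
    public static class RemoteException extends IOException {
        private final String className;
        public RemoteException(String className, String msg) {
            super(msg);
            this.className = className;
        }
        public String getClassName() { return className; }
    }

    // Returns true if the IOException is, or remotely wraps, a TableExistsException.
    public static boolean isTableExists(IOException e) {
        if (e instanceof RemoteException) {
            return "org.apache.hadoop.hbase.TableExistsException"
                    .equals(((RemoteException) e).getClassName());
        }
        return e.getClass().getName()
                .equals("org.apache.hadoop.hbase.TableExistsException");
    }

    public static void main(String[] args) {
        IOException wrapped = new RemoteException(
                "org.apache.hadoop.hbase.TableExistsException", "SYSTEM.MUTEX");
        System.out.println(isTableExists(wrapped)); // prints: true
    }
}
```

With a check like this, createSysMutexTable could skip the wrapped TableExistsException the same way it skips the unwrapped one.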



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3757) System mutex table not being created in SYSTEM namespace when namespace mapping is enabled

2017-04-10 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15962521#comment-15962521
 ] 

Ankit Singhal commented on PHOENIX-3757:


bq.  A little odd as we need to create this table after the namespace is 
created but before we upgrade the tables (as that process is what moves them 
around).
We should be creating the mutex table in the right namespace and don't need to 
rely on the process of upgrading namespace tables, as it doesn't have any 
metadata in the SYSTEM.CATALOG table.

* I think you may need to update the following methods too:
{code}
ConnectionQueryServicesImpl#releaseUpgradeMutex
ConnectionQueryServicesImpl#acquireUpgradeMutex
{code}

* It seems SYSTEM:MUTEX is needed only during the update of meta tables, so how 
about moving createSysMutexTable() into upgradeSystemTables() and avoiding this 
check:
{code}
+
+                try {
+                    // Create the mutex table if it doesn't already exist
+                    // so that it will be moved into the SYSTEM namespace if necessary
+                    createSysMutexTable(admin, props);
+                } catch (IOException e) {
+                    // 1) SYSTEM:MUTEX does not exist, we couldn't create SYSTEM.MUTEX. We will fail
+                    //    later when we try to access the table
+                    // 2) SYSTEM:MUTEX does exist, it was OK we couldn't make SYSTEM.MUTEX. Pass.
+                }
{code}

> System mutex table not being created in SYSTEM namespace when namespace 
> mapping is enabled
> --
>
> Key: PHOENIX-3757
> URL: https://issues.apache.org/jira/browse/PHOENIX-3757
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3757.001.patch, PHOENIX-3757.002.patch
>
>
> Noticed this issue while writing a test for PHOENIX-3756:
> The SYSTEM.MUTEX table is always created in the default namespace, even when 
> {{phoenix.schema.isNamespaceMappingEnabled=true}}. At a glance, it looks like 
> the logic for the other system tables isn't applied to the mutex table.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (PHOENIX-3759) Dropping a local index causes NPE

2017-04-06 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-3759:
---
Attachment: PHOENIX-3759.patch

When all the local indexes are dropped, the major compaction creates an empty 
HFile in the L#0 store, and an exception occurs in the subsequent major 
compaction when we try to do an operation on the first key (which is actually 
null). The attached patch should catch it.

Workaround (for those affected by this):
When all the local indexes are dropped, also drop the local index column 
families (starting with L#) from the table descriptor to avoid this issue.

{code}
hbase(main):001:0> describe 'T'
hbase(main):002:0> alter 'T','delete'=>'L#0'
hbase(main):003:0> describe 'T'
{code}
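The null-guard the patch adds can be sketched like this. This is a simplified, hypothetical stand-in for `RepairUtil.isLocalIndexStoreFilesConsistent` (the real method operates on store files and region boundaries), illustrating why the empty L#0 HFile must be special-cased before its first key is dereferenced:

```java
import java.util.Arrays;

public class LocalIndexStoreCheck {
    // An empty L#0 HFile has no first key, so passing null to
    // KeyValue.createKeyValueFromKey() triggers the NPE in the stack trace below.
    // Treat the empty store file as consistent instead of dereferencing null.
    public static boolean isConsistent(byte[] firstKeyOfStoreFile, byte[] regionStartKey) {
        if (firstKeyOfStoreFile == null) {
            return true; // empty store file left behind after dropping all local indexes
        }
        // Otherwise the store file's first key must not precede the region start key.
        return Arrays.compare(firstKeyOfStoreFile, regionStartKey) >= 0;
    }
}
```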



> Dropping a local index causes NPE
> -
>
> Key: PHOENIX-3759
> URL: https://issues.apache.org/jira/browse/PHOENIX-3759
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Mujtaba Chohan
>Assignee: Ankit Singhal
> Attachments: PHOENIX-3759.patch
>
>
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.KeyValue.createKeyValueFromKey(KeyValue.java:2773)
>   at 
> org.apache.phoenix.util.RepairUtil.isLocalIndexStoreFilesConsistent(RepairUtil.java:32)
>   at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.preCompactScannerOpen(IndexHalfStoreFileReaderGenerator.java:197)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$6.call(RegionCoprocessorHost.java:521)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1621)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1697)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1660)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preCompactScannerOpen(RegionCoprocessorHost.java:516)
>   at 
> org.apache.hadoop.hbase.regionserver.compactions.Compactor.preCreateCoprocScanner(Compactor.java:325)
>   at 
> org.apache.hadoop.hbase.regionserver.compactions.Compactor.preCreateCoprocScanner(Compactor.java:315)
>   at 
> org.apache.hadoop.hbase.regionserver.compactions.Compactor.compact(Compactor.java:266)
>   at 
> org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:64)
>   at 
> org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:121)
>   at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1154)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:1559)
>   at 
> org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.doCompaction(CompactSplitThread.java:502)
>   at 
> org.apache.hadoop.hbase.regionserver.CompactSplitThread$CompactionRunner.run(CompactSplitThread.java:540)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at java.util.concurrent.ThreadPoolExecutor$Work
> {noformat}
> Region server gets aborted due to this NPE. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3756) Users lacking ADMIN on 'SYSTEM' HBase namespace can't connect to Phoenix

2017-04-04 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15956375#comment-15956375
 ] 

Ankit Singhal commented on PHOENIX-3756:


Thanks [~elserj] for the amendments, just one more fix and then we are good to 
go.

ensureNamespaceCreated is also used by "CREATE SCHEMA", so please don't catch 
anything there; let the underprivileged user see the actual exception, which can 
sometimes be an AccessDeniedException.

You just need to silently catch it for SYSTEM namespace in 
ensureSystemTablesUpgraded as per 
[comment|https://issues.apache.org/jira/browse/PHOENIX-3756?focusedCommentId=15955446&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15955446]
 

{code}
private boolean ensureSystemTablesUpgraded(ReadOnlyProps props)
        throws SQLException, IOException, IllegalArgumentException, InterruptedException {
    if (!SchemaUtil.isNamespaceMappingEnabled(PTableType.SYSTEM, props)) { return true; }
    HTableInterface metatable = null;
    try (HBaseAdmin admin = getAdmin()) {
        // Namespace-mapping is enabled at this point.
        try {
            ensureNamespaceCreated(QueryConstants.SYSTEM_SCHEMA_NAME);
        } catch (PhoenixIOException e) {

        }

{code}

> Users lacking ADMIN on 'SYSTEM' HBase namespace can't connect to Phoenix
> 
>
> Key: PHOENIX-3756
> URL: https://issues.apache.org/jira/browse/PHOENIX-3756
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3756.001.patch, PHOENIX-3756.002.patch, 
> PHOENIX-3756.003.patch, PHOENIX-3756.004.patch, PHOENIX-3756.005.patch, 
> PHOENIX-3756.006.patch
>
>
> Follow-on from PHOENIX-3652:
> The fix provided in PHOENIX-3652 addressed the default situation where users 
> would need ADMIN on the default HBase namespace. However, when 
> {{phoenix.schema.isNamespaceMappingEnabled=true}} and Phoenix creates its 
> system tables in the {{SYSTEM}} HBase namespace, unprivileged users (those 
> lacking ADMIN on {{SYSTEM}}) still cannot connect to Phoenix.
> The root-cause is essentially the same: the code tries to fetch the 
> {{NamespaceDescriptor}} for the {{SYSTEM}} namespace which requires the ADMIN 
> permission.
> https://github.com/apache/phoenix/blob/8093d10f1a481101d6c93fdf0744ff15ec48f4aa/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java#L1017-L1037



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3756) Users lacking ADMIN on 'SYSTEM' HBase namespace can't connect to Phoenix

2017-04-04 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955508#comment-15955508
 ] 

Ankit Singhal commented on PHOENIX-3756:


bq. The problem here was the case where the system tables exist and are 
properly configured, the client fails to connect as they receive the same 
AccessDeniedException trying to access the NamespaceDescriptor. I was intending 
to just ignore the whole issue of non-upgrade system tables.

Yes, that's why you just need to catch the exception thrown by 
ensureNamespaceCreated and ignore it. If there are tables which are not 
upgraded, the flow will continue to upgrade them; otherwise it returns normally. 
And there is no need to check whether the SYSTEM namespace exists, because the 
user will get a proper exception if the meta table doesn't exist and the code 
tries to create it in the non-existing namespace.



> Users lacking ADMIN on 'SYSTEM' HBase namespace can't connect to Phoenix
> 
>
> Key: PHOENIX-3756
> URL: https://issues.apache.org/jira/browse/PHOENIX-3756
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3756.001.patch, PHOENIX-3756.002.patch, 
> PHOENIX-3756.003.patch, PHOENIX-3756.004.patch, PHOENIX-3756.005.patch
>
>
> Follow-on from PHOENIX-3652:
> The fix provided in PHOENIX-3652 addressed the default situation where users 
> would need ADMIN on the default HBase namespace. However, when 
> {{phoenix.schema.isNamespaceMappingEnabled=true}} and Phoenix creates its 
> system tables in the {{SYSTEM}} HBase namespace, unprivileged users (those 
> lacking ADMIN on {{SYSTEM}}) still cannot connect to Phoenix.
> The root-cause is essentially the same: the code tries to fetch the 
> {{NamespaceDescriptor}} for the {{SYSTEM}} namespace which requires the ADMIN 
> permission.
> https://github.com/apache/phoenix/blob/8093d10f1a481101d6c93fdf0744ff15ec48f4aa/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java#L1017-L1037



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (PHOENIX-3756) Users lacking ADMIN on 'SYSTEM' HBase namespace can't connect to Phoenix

2017-04-04 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955446#comment-15955446
 ] 

Ankit Singhal edited comment on PHOENIX-3756 at 4/4/17 5:16 PM:


Thanks [~elserj] for the update. 

* Can you also add this compatibility check where you are catching the 
AccessDeniedException for the meta table, so that we still do the compatibility 
checks (for version compatibility and a consistent namespace property) and end 
the flow if the SYSTEM.CATALOG table doesn't exist.

{code}
checkClientServerCompatibility(

SchemaUtil.getPhysicalName(SYSTEM_CATALOG_NAME_BYTES, 
this.getProps()).getName());
{code}

* We should not be returning early here. Ignore the exception and let 
"(tableNames.size() == 0) { return true; }" take care of the flow. A 
NamespaceNotExist exception will be thrown if a non-upgraded system table 
exists; otherwise the client can fail at a later stage while accessing 
namespace-mapped system tables.
{code}
+// Namespace-mapping is enabled at this point.
+try {
+ensureNamespaceCreated(QueryConstants.SYSTEM_SCHEMA_NAME);
+} catch (PhoenixIOException e) {
+// User might not be privileged to access the Phoenix system 
tables
+// in the HBase "SYSTEM" namespace (lacking 'ADMIN'). Let them 
proceed without
+// verifying the system table configuration.
+logger.warn("Could not access system namespace, assuming it 
exists");
+return false;
+}
{code}

** You may need to move the code which removes the SYSTEM.MUTEX table name from 
the table list before the tableNames.size() condition, and wrap it with 
TableName, as this may be needed until PHOENIX-3757 is fixed. 
{code}
tableNames.remove(TableName.valueOf(PhoenixDatabaseMetaData.SYSTEM_MUTEX_NAME));
{code}


* And after the above, we can remove this check.
{code}
 if (!ensureSystemTablesUpgraded(ConnectionQueryServicesImpl.this.getProps())) {
+logger.debug("Failed to upgrade system 
tables, assuming they are properly configured.");
+success = true;
+return null;
+}
{code}



was (Author: an...@apache.org):
Thanks [~elserj] for the update. 

* Can you also add this compatibility check when you are caching 
accessDeniedException for meta table so that we still be doing compatibility 
checks (for version compatibility and consistent namespace property) and end a 
flow if SYSTEM.CATALOG table doesn't exists.

{code}
checkClientServerCompatibility(

SchemaUtil.getPhysicalName(SYSTEM_CATALOG_NAME_BYTES, 
this.getProps()).getName());
{code}

* we should not be returning early here, Ignore the exception and let 
"(tableNames.size() == 0) { return true; }" to take care the flow. 
NamespaceNotExist Exception will be thrown if non upgraded system table exists 
otherwise client can fail in later stage while accessing namespace mapped 
system tables.
{code}
+// Namespace-mapping is enabled at this point.
+try {
+ensureNamespaceCreated(QueryConstants.SYSTEM_SCHEMA_NAME);
+} catch (PhoenixIOException e) {
+// User might not be privileged to access the Phoenix system 
tables
+// in the HBase "SYSTEM" namespace (lacking 'ADMIN'). Let them 
proceed without
+// verifying the system table configuration.
+logger.warn("Could not access system namespace, assuming it 
exists");
+return false;
+}
{code}

** you may need to move code which removes SYSTEM.MUTEX table name from tables 
before tableNames.size() condition as this may be needed until PHOENIX-3757 is 
fixed. 
{code}
tableNames.remove(TableName.valueOf(PhoenixDatabaseMetaData.SYSTEM_MUTEX_NAME));
{code}


* And after above, we can remove this check.
{code}
 if (!ensureSystemTablesUpgraded(ConnectionQueryServicesImpl.this.getProps())) {
+logger.debug("Failed to upgrade system 
tables, assuming they are properly configured.");
+success = true;
+return null;
+}
{code}


> Users lacking ADMIN on 'SYSTEM' HBase namespace can't connect to Phoenix
> 
>
> Key: PHOENIX-3756
> URL: https://issues.apache.org/jira/browse/PHOENIX-3756
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>

[jira] [Commented] (PHOENIX-3756) Users lacking ADMIN on 'SYSTEM' HBase namespace can't connect to Phoenix

2017-04-04 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15955446#comment-15955446
 ] 

Ankit Singhal commented on PHOENIX-3756:


Thanks [~elserj] for the update. 

* Can you also add this compatibility check where you are catching the 
AccessDeniedException for the meta table, so that we still do the compatibility 
checks (for version compatibility and a consistent namespace property) and end 
the flow if the SYSTEM.CATALOG table doesn't exist.

{code}
checkClientServerCompatibility(

SchemaUtil.getPhysicalName(SYSTEM_CATALOG_NAME_BYTES, 
this.getProps()).getName());
{code}

* We should not be returning early here. Ignore the exception and let 
"(tableNames.size() == 0) { return true; }" take care of the flow. A 
NamespaceNotExist exception will be thrown if a non-upgraded system table 
exists; otherwise the client can fail at a later stage while accessing 
namespace-mapped system tables.
{code}
+// Namespace-mapping is enabled at this point.
+try {
+ensureNamespaceCreated(QueryConstants.SYSTEM_SCHEMA_NAME);
+} catch (PhoenixIOException e) {
+// User might not be privileged to access the Phoenix system 
tables
+// in the HBase "SYSTEM" namespace (lacking 'ADMIN'). Let them 
proceed without
+// verifying the system table configuration.
+logger.warn("Could not access system namespace, assuming it 
exists");
+return false;
+}
{code}

** You may need to move the code which removes the SYSTEM.MUTEX table name from 
the table list before the tableNames.size() condition, as this may be needed 
until PHOENIX-3757 is fixed. 
{code}
tableNames.remove(TableName.valueOf(PhoenixDatabaseMetaData.SYSTEM_MUTEX_NAME));
{code}


* And after above, we can remove this check.
{code}
 if (!ensureSystemTablesUpgraded(ConnectionQueryServicesImpl.this.getProps())) {
+logger.debug("Failed to upgrade system 
tables, assuming they are properly configured.");
+success = true;
+return null;
+}
{code}


> Users lacking ADMIN on 'SYSTEM' HBase namespace can't connect to Phoenix
> 
>
> Key: PHOENIX-3756
> URL: https://issues.apache.org/jira/browse/PHOENIX-3756
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3756.001.patch, PHOENIX-3756.002.patch, 
> PHOENIX-3756.003.patch, PHOENIX-3756.004.patch, PHOENIX-3756.005.patch
>
>
> Follow-on from PHOENIX-3652:
> The fix provided in PHOENIX-3652 addressed the default situation where users 
> would need ADMIN on the default HBase namespace. However, when 
> {{phoenix.schema.isNamespaceMappingEnabled=true}} and Phoenix creates its 
> system tables in the {{SYSTEM}} HBase namespace, unprivileged users (those 
> lacking ADMIN on {{SYSTEM}}) still cannot connect to Phoenix.
> The root-cause is essentially the same: the code tries to fetch the 
> {{NamespaceDescriptor}} for the {{SYSTEM}} namespace which requires the ADMIN 
> permission.
> https://github.com/apache/phoenix/blob/8093d10f1a481101d6c93fdf0744ff15ec48f4aa/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java#L1017-L1037



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (PHOENIX-3571) Potential divide by zero exception in LongDivideExpression

2017-04-04 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15954961#comment-15954961
 ] 

Ankit Singhal commented on PHOENIX-3571:


[~tedyu], this is expected as per the test case.
Do you want us to assert on a zero denominator instead of this stack trace, or 
should we represent X/0 as NULL, like MySQL?
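The MySQL-style alternative mentioned above could look like the following hypothetical sketch (not Phoenix's actual LongDivideExpression), where a zero denominator yields SQL NULL instead of an ArithmeticException:

```java
public class SafeLongDivide {
    // Returns null (modeling SQL NULL) instead of throwing ArithmeticException
    // when the denominator is zero, mirroring MySQL's X/0 semantics.
    public static Long divide(Long numerator, Long denominator) {
        if (numerator == null || denominator == null || denominator == 0L) {
            return null;
        }
        return numerator / denominator;
    }
}
```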


> Potential divide by zero exception in LongDivideExpression
> --
>
> Key: PHOENIX-3571
> URL: https://issues.apache.org/jira/browse/PHOENIX-3571
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Ted Yu
>Priority: Minor
>
> Running SaltedIndexIT, I saw the following:
> {code}
> ===> 
> testExpressionThrowsException(org.apache.phoenix.end2end.index.IndexExpressionIT)
>  starts
> 2017-01-05 19:42:48,992 INFO  [main] client.HBaseAdmin: Created I
> 2017-01-05 19:42:48,996 INFO  [main] schema.MetaDataClient: Created index I 
> at 1483645369000
> 2017-01-05 19:42:49,066 WARN  [hconnection-0x5a45c218-shared--pool52-t6] 
> client.AsyncProcess: #38, table=T, attempt=1/35 failed=1ops, last exception: 
> org.apache.phoenix.hbase.index.builder.IndexBuildingFailureException: 
> org.apache.phoenix.hbase.index.builder.IndexBuildingFailureException: Failed 
> to build index for unexpected reason!
>   at 
> org.apache.phoenix.hbase.index.util.IndexManagementUtil.rethrowIndexingException(IndexManagementUtil.java:183)
>   at org.apache.phoenix.hbase.index.Indexer.preBatchMutate(Indexer.java:204)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$35.call(RegionCoprocessorHost.java:974)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1660)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1734)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1692)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preBatchMutate(RegionCoprocessorHost.java:970)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3218)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2984)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2926)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:718)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:680)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2065)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32393)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2141)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:238)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:218)
> Caused by: java.lang.ArithmeticException: / by zero
>   at 
> org.apache.phoenix.expression.LongDivideExpression.evaluate(LongDivideExpression.java:50)
>   at 
> org.apache.phoenix.index.IndexMaintainer.buildRowKey(IndexMaintainer.java:521)
>   at 
> org.apache.phoenix.index.IndexMaintainer.buildUpdateMutation(IndexMaintainer.java:859)
>   at 
> org.apache.phoenix.index.PhoenixIndexCodec.getIndexUpserts(PhoenixIndexCodec.java:76)
>   at 
> org.apache.phoenix.hbase.index.covered.NonTxIndexBuilder.addCurrentStateMutationsForBatch(NonTxIndexBuilder.java:288)
>   at 
> org.apache.phoenix.hbase.index.covered.NonTxIndexBuilder.addUpdateForGivenTimestamp(NonTxIndexBuilder.java:256)
>   at 
> org.apache.phoenix.hbase.index.covered.NonTxIndexBuilder.addMutationsForBatch(NonTxIndexBuilder.java:222)
>   at 
> org.apache.phoenix.hbase.index.covered.NonTxIndexBuilder.batchMutationAndAddUpdates(NonTxIndexBuilder.java:109)
>   at 
> org.apache.phoenix.hbase.index.covered.NonTxIndexBuilder.getIndexUpdate(NonTxIndexBuilder.java:71)
>   at 
> org.apache.phoenix.hbase.index.builder.IndexBuildManager$1.call(IndexBuildManager.java:136)
>   at 
> org.apache.phoenix.hbase.index.builder.IndexBuildManager$1.call(IndexBuildManager.java:132)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:253)
>   at 
> com.google.common.util.concurrent.Abstract

[jira] [Commented] (PHOENIX-3572) Support FETCH NEXT| n ROWS from Cursor

2017-04-04 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15954887#comment-15954887
 ] 

Ankit Singhal commented on PHOENIX-3572:


Thanks [~gsbiju], +1, looks good to me. 
[~rajeshbabu], do you see any impact on phoenix-calcite from this new construct, 
i.e., how complex would it be to support this syntax in the calcite branch too?

> Support FETCH NEXT| n ROWS from Cursor
> --
>
> Key: PHOENIX-3572
> URL: https://issues.apache.org/jira/browse/PHOENIX-3572
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Biju Nair
>Assignee: Biju Nair
>
> Implement required changes to support 
> - {{DECLARE}} and {{OPEN}} a cursor
> - query {{FETCH NEXT | n ROWS}} from the cursor
> - {{CLOSE}} the cursor
> Based on the feedback in [PR 
> #192|https://github.com/apache/phoenix/pull/192], implement the changes using 
> {{ResultSet}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (PHOENIX-3751) spark 2.1 with Phoenix 4.10 load data as dataframe fail, NullPointerException

2017-04-04 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-3751:
---
Attachment: PHOENIX-3751.patch

> spark 2.1 with Phoenix 4.10 load data as dataframe fail, NullPointerException
> -
>
> Key: PHOENIX-3751
> URL: https://issues.apache.org/jira/browse/PHOENIX-3751
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
> Environment: HBase 1.14
> spark 2.10
> phoenix: 4.10
>Reporter: Nan Xu
>Assignee: Ankit Singhal
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3751.patch
>
>
> create phoenix table: 
> create table phoenix.quote (  
> sym Varchar not null, 
> src varchar,
> kdbPublishTime time not null,
> location varchar,
> bid DOUBLE,
> ask DOUBLE,
> bsize unsigned_int,
> asize unsigned_int, 
> srcTime time,
> layer varchar,
> expiryTime time,
> quoteId varchar,
> recvTime time, 
> distTime  time,
> "TIME" time
> CONSTRAINT quote_pk PRIMARY KEY (sym, src, kdbPublishTime)) 
> COMPRESSION='SNAPPY', DATA_BLOCK_ENCODING='FAST_DIFF', VERSIONS=1000
> insert data:
> SYM   SRC KDBPUBLISHTIME  LOCATIONBID ASK BSIZE   ASIZE   
> SRCTIMELAYEREXPIRYTIME  QUOTEID   RECVTIME 
> DISTTIME TIME
> 6AH7  cme103:42:59N  0.7471   0.7506  20  25  
> 03:42:59   (null)   (null)  (null) 03:42:59 
> (null)  03:42:59
> 6AH7  cme103:42:59N  0.7474   0.7506  25  25  
> 03:42:59   (null)   (null)  (null) 03:42:59 
> (null)  03:42:59
> val spark = SparkSession
> .builder()
> .appName("load_avro")
> .master("local[1]")
> .config("spark.sql.warehouse.dir", "file:/tmp/spark-warehouse")
> .getOrCreate()
>  val df = spark.sqlContext.phoenixTableAsDataFrame("PHOENIX.QUOTE", 
> Seq("SYM","SRC", "EXPIRYTIME"), zkUrl = Some("a1.cluster:2181"))
>   df.show(100)
> problem is in PhoenixRDD:140
>  val rowSeq = columns.map { case (name, sqlType) =>
> val res = pr.resultMap(name)
>   // Special handling for data types
>   if (dateAsTimestamp && (sqlType == 91 || sqlType == 19)) { // 91 is 
> the defined type for Date and 19 for UNSIGNED_DATE
> new java.sql.Timestamp(res.asInstanceOf[java.sql.Date].getTime)
>   } else if (sqlType == 92 || sqlType == 18) { // 92 is the defined 
> type for Time and 18 for UNSIGNED_TIME
> new java.sql.Timestamp(res.asInstanceOf[java.sql.Time].getTime)
>   } else {
> res
>   }
>   }
> res.asInstanceOf[java.sql.Time] can be null here, so calling getTime on it throws an NPE.
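
A minimal null-safe guard along the lines of what the reporter describes might look like this (Java sketch; the class and method names are mine, not from the attached patch):

```java
import java.sql.Time;
import java.sql.Timestamp;

// Sketch of a null-safe Time -> Timestamp conversion. The NPE above occurs
// because the column value can be SQL NULL; checking for null before
// dereferencing avoids calling getTime() on a null reference.
public class NullSafeConvert {
    public static Timestamp toTimestamp(Time t) {
        return (t == null) ? null : new Timestamp(t.getTime());
    }

    public static void main(String[] args) {
        System.out.println(toTimestamp(null));                    // prints "null" instead of throwing NPE
        System.out.println(toTimestamp(new Time(0L)).getTime());  // prints "0"
    }
}
```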





[jira] [Updated] (PHOENIX-3751) spark 2.1 with Phoenix 4.10 load data as dataframe fail, NullPointerException

2017-04-04 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-3751:
---
Fix Version/s: 4.11.0

> spark 2.1 with Phoenix 4.10 load data as dataframe fail, NullPointerException
> -
>
> Key: PHOENIX-3751
> URL: https://issues.apache.org/jira/browse/PHOENIX-3751
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
> Environment: HBase 1.14
> spark 2.10
> phoenix: 4.10
>Reporter: Nan Xu
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3751.patch





[jira] [Commented] (PHOENIX-3751) spark 2.1 with Phoenix 4.10 load data as dataframe fail, NullPointerException

2017-04-04 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15954854#comment-15954854
 ] 

Ankit Singhal commented on PHOENIX-3751:


Thanks [~angelfox] for reporting it with details. I'm attaching a fix; see if you
can apply it on top of the 4.10 release tag and use it.

> spark 2.1 with Phoenix 4.10 load data as dataframe fail, NullPointerException
> -
>
> Key: PHOENIX-3751
> URL: https://issues.apache.org/jira/browse/PHOENIX-3751
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
> Environment: HBase 1.14
> spark 2.10
> phoenix: 4.10
>Reporter: Nan Xu
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3751.patch





[jira] [Assigned] (PHOENIX-3751) spark 2.1 with Phoenix 4.10 load data as dataframe fail, NullPointerException

2017-04-04 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal reassigned PHOENIX-3751:
--

Assignee: Ankit Singhal

> spark 2.1 with Phoenix 4.10 load data as dataframe fail, NullPointerException
> -
>
> Key: PHOENIX-3751
> URL: https://issues.apache.org/jira/browse/PHOENIX-3751
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
> Environment: HBase 1.14
> spark 2.10
> phoenix: 4.10
>Reporter: Nan Xu
>Assignee: Ankit Singhal
> Fix For: 4.11.0
>
> Attachments: PHOENIX-3751.patch





[jira] [Resolved] (PHOENIX-3749) ERROR 502 (42702): Column reference ambiguous when order by duplicate names

2017-04-04 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-3749.

   Resolution: Duplicate
Fix Version/s: 4.9.0

> ERROR 502 (42702): Column reference ambiguous when order by duplicate names
> ---
>
> Key: PHOENIX-3749
> URL: https://issues.apache.org/jira/browse/PHOENIX-3749
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
> Environment: CentOS6 JDK1.7.0_79
> x86_64 
>Reporter: chen weihua
> Fix For: 4.9.0
>
>
> Tableau generates queries that alias a fully-qualified column name to its 
> shortened name.
> Similar to:
> select room_id, sum(error_code403) as error_code403
> from dla_node_main
> group by room_id 
> order by error_code403
> Phoenix reports:
> Error: ERROR 502 (42702): Column reference ambiguous or duplicate names. 
> columnName=ERROR_CODE403
> SQLState:  42702
> ErrorCode: 502
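
A common workaround for this class of error, pending the fix tracked in PHOENIX-2930, is to alias the aggregate to a name that does not collide with the underlying column (illustrative rewrite of the query above):

```sql
SELECT room_id, SUM(error_code403) AS total_403
FROM dla_node_main
GROUP BY room_id
ORDER BY total_403;
```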





[jira] [Commented] (PHOENIX-3749) ERROR 502 (42702): Column reference ambiguous when order by duplicate names

2017-04-04 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15954731#comment-15954731
 ] 

Ankit Singhal commented on PHOENIX-3749:


Duplicate of PHOENIX-2930

> ERROR 502 (42702): Column reference ambiguous when order by duplicate names
> ---
>
> Key: PHOENIX-3749
> URL: https://issues.apache.org/jira/browse/PHOENIX-3749
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
> Environment: CentOS6 JDK1.7.0_79
> x86_64 
>Reporter: chen weihua





Re: [VOTE] Release of Apache Phoenix 4.10.0 RC0

2017-03-09 Thread Ankit Singhal
PHOENIX-3726 could be a blocker for this release.

On Wed, Mar 8, 2017 at 3:40 AM, James Taylor  wrote:

> Hello Everyone,
>
> This is a call for a vote on Apache Phoenix 4.10.0 RC0. This is the next
> minor release of Phoenix 4, compatible with Apache HBase 0.98, 1.1 & 1.2.
> The release includes both a source-only release and a convenience binary
> release for each supported HBase version.
>
> This release has feature parity with supported HBase versions and includes
> the following improvements:
> - Introduce indirection between column name and qualifier [1]
> - Store data for append-only tables in single cell [2]
> - Support Spark 2.0 [3]
> - Provide Kafka Phoenix consumer [4]
> - Distribute UPSERT SELECT across cluster [5]
> - Improve Hive PhoenixStorageHandler [6]
> - Fix more than 40 bugs
>
> The source tarball, including signatures, digests, etc can be found at:
> https://dist.apache.org/repos/dist/dev/phoenix/apache-
> phoenix-4.10.0-HBase-0.98-rc0/src/
> https://dist.apache.org/repos/dist/dev/phoenix/apache-
> phoenix-4.10.0-HBase-1.1-rc0/src/
> https://dist.apache.org/repos/dist/dev/phoenix/apache-
> phoenix-4.10.0-HBase-1.2-rc0/src/
>
> The binary artifacts can be found at:
> https://dist.apache.org/repos/dist/dev/phoenix/apache-
> phoenix-4.10.0-HBase-0.98-rc0/bin/
> https://dist.apache.org/repos/dist/dev/phoenix/apache-
> phoenix-4.10.0-HBase-1.1-rc0/bin/
> https://dist.apache.org/repos/dist/dev/phoenix/apache-
> phoenix-4.10.0-HBase-1.2-rc0/bin/
>
> For a complete list of changes, see:
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?
> projectId=12315120&version=12338126
>
> Release artifacts are signed with the following key:
> https://people.apache.org/keys/committer/mujtaba.asc
> https://dist.apache.org/repos/dist/release/phoenix/KEYS
>
> The hash and tag to be voted upon:
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=
> c07714c3c94ce85ac4257cfd7e26453f3f5f4232
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=tag;
> h=refs/tags/v4.10.0-HBase-0.98-rc0
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=
> dc1a60543c1cca25581d13cabedd733276fceb2c
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=tag;
> h=refs/tags/v4.10.0-HBase-1.1-rc0
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=
> 2c66e3cbd085f89a0631891839242e24a63f33fc
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=tag;
> h=refs/tags/v4.10.0-HBase-1.2-rc0
>
> Vote will be open for at least 72 hours. Please vote:
>
> [ ] +1 approve
> [ ] +0 no opinion
> [ ] -1 disapprove (and reason why)
>
> Thanks,
> The Apache Phoenix Team
>
>
> [1] https://issues.apache.org/jira/browse/PHOENIX-1598
> [2] https://issues.apache.org/jira/browse/PHOENIX-2565
> [3] https://issues.apache.org/jira/browse/PHOENIX-
> [4] https://issues.apache.org/jira/browse/PHOENIX-3214
> [5] https://issues.apache.org/jira/browse/PHOENIX-3271
> [6] https://issues.apache.org/jira/browse/PHOENIX-3346
>


[jira] [Updated] (PHOENIX-3726) Error while upgrading system tables

2017-03-09 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-3726:
---
Fix Version/s: 4.10.0

> Error while upgrading system tables
> ---
>
> Key: PHOENIX-3726
> URL: https://issues.apache.org/jira/browse/PHOENIX-3726
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>    Reporter: Ankit Singhal
> Fix For: 4.10.0
>
> Attachments: PHOENIX-3726.patch
>
>
> {code}
> Error: java.lang.IllegalArgumentException: Expected 4 system table only but 
> found 5:[SYSTEM.CATALOG, SYSTEM.FUNCTION, SYSTEM.MUTEX, SYSTEM.SEQUENCE, 
> SYSTEM.STATS] (state=,code=0)
> java.sql.SQLException: java.lang.IllegalArgumentException: Expected 4 system 
> table only but found 5:[SYSTEM.CATALOG, SYSTEM.FUNCTION, SYSTEM.MUTEX, 
> SYSTEM.SEQUENCE, SYSTEM.STATS]
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2465)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2382)
>   at 
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2382)
>   at 
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
>   at 
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:149)
>   at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
>   at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
>   at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
>   at sqlline.Commands.connect(Commands.java:1064)
>   at sqlline.Commands.connect(Commands.java:996)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
>   at sqlline.SqlLine.dispatch(SqlLine.java:809)
>   at sqlline.SqlLine.initArgs(SqlLine.java:588)
>   at sqlline.SqlLine.begin(SqlLine.java:661)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:291)
> Caused by: java.lang.IllegalArgumentException: Expected 4 system table only 
> but found 5:[SYSTEM.CATALOG, SYSTEM.FUNCTION, SYSTEM.MUTEX, SYSTEM.SEQUENCE, 
> SYSTEM.STATS]
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureSystemTablesUpgraded(ConnectionQueryServicesImpl.java:3091)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.access$600(ConnectionQueryServicesImpl.java:260)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2418)
>   ... 20 more
> {code}
> ping [~giacomotaylor]





[jira] [Updated] (PHOENIX-3726) Error while upgrading system tables

2017-03-09 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-3726:
---
Priority: Blocker  (was: Major)

> Error while upgrading system tables
> ---
>
> Key: PHOENIX-3726
> URL: https://issues.apache.org/jira/browse/PHOENIX-3726
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>    Reporter: Ankit Singhal
>Priority: Blocker
> Fix For: 4.10.0
>
> Attachments: PHOENIX-3726.patch
>





[jira] [Commented] (PHOENIX-3726) Error while upgrading system tables

2017-03-09 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15902685#comment-15902685
 ] 

Ankit Singhal commented on PHOENIX-3726:


[~chrajeshbab...@gmail.com], please review and commit if necessary, as I'll be
on leave until 2nd April.

> Error while upgrading system tables
> ---
>
> Key: PHOENIX-3726
> URL: https://issues.apache.org/jira/browse/PHOENIX-3726
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Ankit Singhal
> Attachments: PHOENIX-3726.patch
>





[jira] [Updated] (PHOENIX-3726) Error while upgrading system tables

2017-03-09 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-3726:
---
Attachment: PHOENIX-3726.patch

> Error while upgrading system tables
> ---
>
> Key: PHOENIX-3726
> URL: https://issues.apache.org/jira/browse/PHOENIX-3726
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>    Reporter: Ankit Singhal
> Attachments: PHOENIX-3726.patch
>





[jira] [Created] (PHOENIX-3726) Error while upgrading system tables

2017-03-09 Thread Ankit Singhal (JIRA)
Ankit Singhal created PHOENIX-3726:
--

 Summary: Error while upgrading system tables
 Key: PHOENIX-3726
 URL: https://issues.apache.org/jira/browse/PHOENIX-3726
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.10.0
Reporter: Ankit Singhal
 Attachments: PHOENIX-3726.patch

{code}
Error: java.lang.IllegalArgumentException: Expected 4 system table only but 
found 5:[SYSTEM.CATALOG, SYSTEM.FUNCTION, SYSTEM.MUTEX, SYSTEM.SEQUENCE, 
SYSTEM.STATS] (state=,code=0)
java.sql.SQLException: java.lang.IllegalArgumentException: Expected 4 system 
table only but found 5:[SYSTEM.CATALOG, SYSTEM.FUNCTION, SYSTEM.MUTEX, 
SYSTEM.SEQUENCE, SYSTEM.STATS]
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2465)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2382)
at 
org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2382)
at 
org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
at 
org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:149)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
at sqlline.Commands.connect(Commands.java:1064)
at sqlline.Commands.connect(Commands.java:996)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
at sqlline.SqlLine.dispatch(SqlLine.java:809)
at sqlline.SqlLine.initArgs(SqlLine.java:588)
at sqlline.SqlLine.begin(SqlLine.java:661)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:291)
Caused by: java.lang.IllegalArgumentException: Expected 4 system table only but 
found 5:[SYSTEM.CATALOG, SYSTEM.FUNCTION, SYSTEM.MUTEX, SYSTEM.SEQUENCE, 
SYSTEM.STATS]
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureSystemTablesUpgraded(ConnectionQueryServicesImpl.java:3091)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.access$600(ConnectionQueryServicesImpl.java:260)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2418)
... 20 more
{code}

ping [~giacomotaylor]





[jira] [Commented] (PHOENIX-3649) higher memory consumption on RS leading to OOM/abort on immutable index creation with multiple regions on single RS

2017-03-06 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15896988#comment-15896988
 ] 

Ankit Singhal commented on PHOENIX-3649:


bq. IMO, it'd be better to mark it as inactive before starting the index build.
OK, made the necessary changes in the v3 patch.

Thanks [~giacomotaylor] for the review. Committed the change to the 4.x branches
and master, and a similar fix to the 4.9 branch as well.

> higher memory consumption on RS leading to OOM/abort on immutable index 
> creation with multiple regions on single RS
> ---
>
> Key: PHOENIX-3649
> URL: https://issues.apache.org/jira/browse/PHOENIX-3649
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.9.0
>Reporter: Mujtaba Chohan
>Assignee: Ankit Singhal
>Priority: Blocker
> Fix For: 4.9.1, 4.10.0
>
> Attachments: PHOENIX-3649_4.9_branch.patch, PHOENIX-3649.patch, 
> PHOENIX-3649_v1.patch, PHOENIX-3649_v2.patch, PHOENIX-3649_v3.patch
>
>
> *Configuration*
> hbase-0.98.23 standalone
> Heap 5GB
> *When*
> Verified that this happens after PHOENIX-3271 Distribute UPSERT SELECT across 
> cluster. 
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commitdiff;h=accd4a276d1085e5d1069caf93798d8f301e4ed6
> To repro
> {noformat}
> CREATE TABLE INDEXED_TABLE (HOST CHAR(2) NOT NULL,DOMAIN VARCHAR NOT NULL, 
> FEATURE VARCHAR NOT NULL,DATE DATE NOT NULL,USAGE.CORE BIGINT,USAGE.DB 
> BIGINT,STATS.ACTIVE_VISITOR INTEGER CONSTRAINT PK PRIMARY KEY (HOST, DOMAIN, 
> FEATURE, DATE)) IMMUTABLE_ROWS=true,MAX_FILESIZE=30485760
> {noformat}
> Upsert 2M rows (CSV is available at https://goo.gl/OsTSKB) that will create 
> ~4 regions on a single RS and then create index with data present
> {noformat}
> CREATE INDEX idx5 ON INDEXED_TABLE (CORE) INCLUDE (DB,ACTIVE_VISITOR)
> {noformat}
> From RS log
> {noformat}
> 2017-02-02 13:29:06,899 WARN  [rs,51371,1486070044538-HeapMemoryChore] 
> regionserver.HeapMemoryManager: heapOccupancyPercent 0.97875696 is above heap 
> occupancy alarm watermark (0.95)
> 2017-02-02 13:29:18,198 INFO  [SessionTracker] server.ZooKeeperServer: 
> Expiring session 0x15a00ad4f31, timeout of 1ms exceeded
> 2017-02-02 13:29:18,231 WARN  [JvmPauseMonitor] util.JvmPauseMonitor: 
> Detected pause in JVM or host machine (eg GC): pause of approximately 10581ms
> GC pool 'ParNew' had collection(s): count=4 time=139ms
> 2017-02-02 13:29:19,669 FATAL [RS:0;rs:51371-EventThread] 
> regionserver.HRegionServer: ABORTING region server rs,51371,1486070044538: 
> regionserver:51371-0x15a00ad4f31, quorum=localhost:2181, baseZNode=/hbase 
> regionserver:51371-0x15a00ad4f31 received expired from ZooKeeper, aborting
> {noformat}
> Prior to the change index creation succeeds with as little as 2GB heap.
> [~an...@apache.org]





[jira] [Updated] (PHOENIX-3649) higher memory consumption on RS leading to OOM/abort on immutable index creation with multiple regions on single RS

2017-03-06 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-3649:
---
Attachment: PHOENIX-3649_4.9_branch.patch



[jira] [Updated] (PHOENIX-3649) higher memory consumption on RS leading to OOM/abort on immutable index creation with multiple regions on single RS

2017-03-06 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-3649:
---
Summary: higher memory consumption on RS leading to OOM/abort on immutable 
index creation with multiple regions on single RS  (was: After PHOENIX-3271 
higher memory consumption on RS leading to OOM/abort on immutable index 
creation with multiple regions on single RS)



[jira] [Updated] (PHOENIX-3649) After PHOENIX-3271 higher memory consumption on RS leading to OOM/abort on immutable index creation with multiple regions on single RS

2017-03-06 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-3649:
---
Attachment: PHOENIX-3649_v3.patch



[jira] [Commented] (PHOENIX-3649) After PHOENIX-3271 higher memory consumption on RS leading to OOM/abort on immutable index creation with multiple regions on single RS

2017-03-05 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15896765#comment-15896765
 ] 

Ankit Singhal commented on PHOENIX-3649:


bq. The only problem is that the index will be active, but it won't be 
consistent with the data table until the buildIndex you added is complete. 
Perhaps we should mark the index as inactive while the buildIndex is running?

The index can also be marked inactive by other processes (for example, when an 
index is disabled and automatic rebuilding is kicked off by marking it 
inactive). So if we mark it inactive for this build and start the buildIndex, 
we can't safely mark it active again afterwards, because we don't know whether 
those other processes have completed.

Let me know if there are any other comments; otherwise I'll go ahead and commit it.
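The hazard described above is essentially a lost-update race on a shared index state flag; a tiny sketch under assumed names (the enum and the flag are illustrative, not Phoenix's actual PIndexState handling):

```java
import java.util.concurrent.atomic.AtomicReference;

public class IndexStateRace {
    enum State { ACTIVE, INACTIVE }

    public static void main(String[] args) {
        AtomicReference<State> indexState = new AtomicReference<>(State.ACTIVE);

        // Our build marks the index INACTIVE before rebuilding...
        indexState.set(State.INACTIVE);

        // ...but another process (e.g. an automatic rebuild kicked off after a
        // disable) may also have set INACTIVE for its own reasons. When our
        // build finishes, blindly restoring ACTIVE would hide that process's
        // still-incomplete work, so we may only flip the state back once we
        // know no other process still depends on it being INACTIVE.
        boolean otherRebuildStillRunning = true;
        if (!otherRebuildStillRunning) {
            indexState.set(State.ACTIVE);
        }
        System.out.println("final state: " + indexState.get());  // prints "final state: INACTIVE"
    }
}
```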





[jira] [Comment Edited] (PHOENIX-3649) After PHOENIX-3271 higher memory consumption on RS leading to OOM/abort on immutable index creation with multiple regions on single RS

2017-03-02 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15892131#comment-15892131
 ] 

Ankit Singhal edited comment on PHOENIX-3649 at 3/2/17 12:05 PM:
-

Thanks [~giacomotaylor] for explaining it in detail, but that only covers new 
upserts arriving at LATEST_TIMESTAMP during the index-building phase; the case 
is different with UPSERT SELECT. Consider an UPSERT SELECT that started at 
timestamp t0 and writes its data at t0, while the index is created in parallel 
at timestamp t1 (t1>t0). Per the current index-building logic, the first pass 
builds the index from 0 to t2 (t1 plus a few more seconds), and the second pass 
builds from (t1 minus the first pass's build time) to t2, which may not include 
t0. Meanwhile the UPSERT SELECT is still writing data at t0, so no mutations 
for the new index are generated on the server and no pass ever picks up the new 
data written at t0. So, in the new patch, I'm building the new index at t0 to 
include that data, which fixes ImmutableIndexIT#testCreateIndexDuringUpsertSelect. 
Let me know if this is fine now or whether we should do something else.

bq. We set the cell timestamp in MutationState (based on a return of 
MutationState.validate()) so that all of the mutations for an UPSERT SELECT 
have a consistent timestamp. Since the server-side execution is bypassing 
MutationState, we're skipping that (and for the same reason, you're right, we 
can't run it server side when an immutable table has indexes).

Sorry for the confusion: in the earlier patch and in PHOENIX-3271, we were 
already doing the upsert at the statement's compile time, using the scan max 
time, which is capped at the statement's compile time.



was (Author: an...@apache.org):
Thanks [~giacomotaylor] for explaining it in detail but this will work when new 
upserts which are coming at LATEST_TIMESTAMP during index building phase but 
the case is different with UPSERT SELECT. Consider that UPSERT SELECT started 
at timestamp t0 and writing data at t0 only, and the index is created parallely 
at timestamp t1 (t1>t0). So, as per the current logic of index building, in 
first pass , we build index from 0 to t2(t1+some more seconds) and in second 
pass we build from (t1-building time in the first pass) to t2 which may not 
include t0 timestamp , but still UPSERT SELECT is running which is writing data 
at t0, no mutation for new index will be added on server and there is no run to 
build the new data written at t0. So, in a new patch, I'm building the new 
index at t0 to include the data which fix 
ImmutableIndexIT#testCreateIndexDuringUpsertSelect. let me know if it is fine 
now or we can do anything else.

bq. We set the cell timestamp in MutationState (based on a return of 
MutationState.validate()) so that all of the mutations for an UPSERT SELECT 
have a consistent timestamp. Since the server-side execution is bypassing 
MutationState, we're skipping that (and for the same reason, you're right, we 
can't run it server side when an immutable table has indexes).
Sorry for the confusion, in the last patch too, we were doing upsert at the 
compile time of the statement only. we were using scan max time which is capped 
at the compile time of statement only.




[jira] [Updated] (PHOENIX-3649) After PHOENIX-3271 higher memory consumption on RS leading to OOM/abort on immutable index creation with multiple regions on single RS

2017-03-02 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-3649:
---
Attachment: PHOENIX-3649_v2.patch



[jira] [Commented] (PHOENIX-3649) After PHOENIX-3271 higher memory consumption on RS leading to OOM/abort on immutable index creation with multiple regions on single RS

2017-03-02 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15892131#comment-15892131
 ] 

Ankit Singhal commented on PHOENIX-3649:


Thanks [~giacomotaylor] for explaining it in detail, but that only covers new 
upserts arriving at LATEST_TIMESTAMP during the index-building phase; the case 
is different with UPSERT SELECT. Consider an UPSERT SELECT that started at 
timestamp t0 and writes its data at t0, while the index is created in parallel 
at timestamp t1 (t1>t0). Per the current index-building logic, the first pass 
builds the index from 0 to t2 (t1 plus a few more seconds), and the second pass 
builds from (t1 minus the first pass's build time) to t2, which may not include 
t0. Meanwhile the UPSERT SELECT is still writing data at t0, so no mutations 
for the new index are generated on the server and no pass ever picks up the new 
data written at t0. So, in the new patch, I'm building the new index at t0 to 
include that data, which fixes ImmutableIndexIT#testCreateIndexDuringUpsertSelect. 
Let me know if this is fine now or whether we should do something else.

bq. We set the cell timestamp in MutationState (based on a return of 
MutationState.validate()) so that all of the mutations for an UPSERT SELECT 
have a consistent timestamp. Since the server-side execution is bypassing 
MutationState, we're skipping that (and for the same reason, you're right, we 
can't run it server side when an immutable table has indexes).

Sorry for the confusion: in the last patch too, we were doing the upsert at the 
statement's compile time, using the scan max time, which is capped at the 
statement's compile time.





[jira] [Commented] (PHOENIX-3583) Prepare IndexMaintainer on server itself

2017-03-01 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889845#comment-15889845
 ] 

Ankit Singhal commented on PHOENIX-3583:


bq. We have logic on the client-side in MutationState that detects this 
condition. We also have logic in our index building code for this (and tests as 
well). If there's a known issue, we should fix it.

Can you please point me to the code where we check for a new index before 
building index mutations on the server?

> Prepare IndexMaintainer on server itself
> 
>
> Key: PHOENIX-3583
> URL: https://issues.apache.org/jira/browse/PHOENIX-3583
> Project: Phoenix
>  Issue Type: Bug
>    Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Attachments: PHOENIX-3583.patch
>
>
> -- Reuse the cache of PTable and its lifecycle.
> -- With the new implementation, we will be doing an RPC to the meta table per 
> mini batch, which could be an overhead, but the existing configuration 
> "updateCacheFrequency" can be used to control how frequently the 
> SYSTEM.CATALOG endpoint is touched for an updated PTable or index maintainers. 
> -- It is expected that 99% of the time the table is unchanged and the RPC will 
> return an empty result (so it should be cheap), as opposed to the current 
> implementation, where we have to send the index maintainer payload to each 
> region server per upsert batch.
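The caching behaviour described in the issue can be sketched as a time-bounded lookup, where a fresh entry skips the SYSTEM.CATALOG RPC entirely (class, method, and field names here are hypothetical, not the actual Phoenix implementation):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

public class ServerMetadataCache {
    // Hypothetical cached entry: resolved table metadata plus fetch time.
    static final class CachedTable {
        final String metadata;
        final long fetchedAtMs;
        CachedTable(String metadata, long fetchedAtMs) {
            this.metadata = metadata;
            this.fetchedAtMs = fetchedAtMs;
        }
    }

    private final ConcurrentMap<String, CachedTable> cache = new ConcurrentHashMap<>();
    private final long updateCacheFrequencyMs;  // plays the role of "updateCacheFrequency"

    ServerMetadataCache(long updateCacheFrequencyMs) {
        this.updateCacheFrequencyMs = updateCacheFrequencyMs;
    }

    // Returns cached metadata if it is fresh enough; otherwise performs the
    // per-mini-batch RPC (expected to be cheap most of the time) and
    // refreshes the entry.
    String getTable(String name, long nowMs, Function<String, String> rpcFetch) {
        CachedTable entry = cache.get(name);
        if (entry != null && nowMs - entry.fetchedAtMs < updateCacheFrequencyMs) {
            return entry.metadata;  // fresh: no RPC
        }
        String latest = rpcFetch.apply(name);
        cache.put(name, new CachedTable(latest, nowMs));
        return latest;
    }

    public static void main(String[] args) {
        ServerMetadataCache cache = new ServerMetadataCache(1000);
        AtomicInteger rpcs = new AtomicInteger();
        Function<String, String> fetch = n -> { rpcs.incrementAndGet(); return "ptable-v1"; };
        cache.getTable("T", 0, fetch);     // miss: RPC
        cache.getTable("T", 500, fetch);   // within frequency window: cached
        cache.getTable("T", 2000, fetch);  // stale: RPC again
        System.out.println("RPCs issued: " + rpcs.get());  // prints "RPCs issued: 2"
    }
}
```

The design trade-off the issue describes is visible here: a larger updateCacheFrequency means fewer metadata RPCs but a longer window in which a newly created index goes unnoticed by the server.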





[jira] [Commented] (PHOENIX-3649) After PHOENIX-3271 higher memory consumption on RS leading to OOM/abort on immutable index creation with multiple regions on single RS

2017-03-01 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889837#comment-15889837
 ] 

Ankit Singhal commented on PHOENIX-3649:


bq. Does PHOENIX-3271 set the time stamp of the distributed upsert to the time 
stamp of when the query was started/compiled? We'd want to pass the time stamp 
over from the client so that we're consistent across all region servers. If the 
time stamp is set correctly, then 
ImmutableIndexIT#testCreateIndexDuringUpsertSelect should be ok.
No, we don't pass the compilation timestamp. I thought it was needed only to 
cap the query so it doesn't read new data, and with read isolation we shouldn't 
need that, right? Or do you want updates to go in at the client timestamp even 
when SCN is not set? Note that we can't run UPSERT SELECT on the server for 
immutable tables that have indexes, because index maintenance for immutable 
tables is still handled on the client.

bq. Otherwise, if it's not working for immutable tables, I'd expect it's not 
working for mutable tables either
Yes, there would be the same problem if a mutable index is created while an 
UPSERT SELECT is running on the table.

But we already have this problem today: when a batch is sent to the server with 
index maintainers (in the cache or with the mutations), an index created during 
that time will not get the updates on the fly. See PHOENIX-3583.

> After PHOENIX-3271 higher memory consumption on RS leading to OOM/abort on 
> immutable index creation with multiple regions on single RS
> --
>
> Key: PHOENIX-3649
> URL: https://issues.apache.org/jira/browse/PHOENIX-3649
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.9.0
>Reporter: Mujtaba Chohan
>Assignee: Ankit Singhal
>Priority: Blocker
> Fix For: 4.9.1, 4.10.0
>
> Attachments: PHOENIX-3649.patch, PHOENIX-3649_v1.patch
>
>
> *Configuration*
> hbase-0.98.23 standalone
> Heap 5GB
> *When*
> Verified that this happens after PHOENIX-3271 Distribute UPSERT SELECT across 
> cluster. 
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commitdiff;h=accd4a276d1085e5d1069caf93798d8f301e4ed6
> To repro
> {noformat}
> CREATE TABLE INDEXED_TABLE (HOST CHAR(2) NOT NULL,DOMAIN VARCHAR NOT NULL, 
> FEATURE VARCHAR NOT NULL,DATE DATE NOT NULL,USAGE.CORE BIGINT,USAGE.DB 
> BIGINT,STATS.ACTIVE_VISITOR INTEGER CONSTRAINT PK PRIMARY KEY (HOST, DOMAIN, 
> FEATURE, DATE)) IMMUTABLE_ROWS=true,MAX_FILESIZE=30485760
> {noformat}
> Upsert 2M rows (CSV is available at https://goo.gl/OsTSKB) that will create 
> ~4 regions on a single RS and then create index with data present
> {noformat}
> CREATE INDEX idx5 ON INDEXED_TABLE (CORE) INCLUDE (DB,ACTIVE_VISITOR)
> {noformat}
> From RS log
> {noformat}
> 2017-02-02 13:29:06,899 WARN  [rs,51371,1486070044538-HeapMemoryChore] 
> regionserver.HeapMemoryManager: heapOccupancyPercent 0.97875696 is above heap 
> occupancy alarm watermark (0.95)
> 2017-02-02 13:29:18,198 INFO  [SessionTracker] server.ZooKeeperServer: 
> Expiring session 0x15a00ad4f31, timeout of 1ms exceeded
> 2017-02-02 13:29:18,231 WARN  [JvmPauseMonitor] util.JvmPauseMonitor: 
> Detected pause in JVM or host machine (eg GC): pause of approximately 10581ms
> GC pool 'ParNew' had collection(s): count=4 time=139ms
> 2017-02-02 13:29:19,669 FATAL [RS:0;rs:51371-EventThread] 
> regionserver.HRegionServer: ABORTING region server rs,51371,1486070044538: 
> regionserver:51371-0x15a00ad4f31, quorum=localhost:2181, baseZNode=/hbase 
> regionserver:51371-0x15a00ad4f31 received expired from ZooKeeper, aborting
> {noformat}
> Prior to the change index creation succeeds with as little as 2GB heap.
> [~an...@apache.org]



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (PHOENIX-3649) After PHOENIX-3271 higher memory consumption on RS leading to OOM/abort on immutable index creation with multiple regions on single RS

2017-02-27 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-3649:
---
Attachment: PHOENIX-3649_v1.patch

v1 patch for test execution.

[~giacomotaylor], if the first index is created while an UPSERT SELECT is running 
on an immutable table, then on the server side we will not see the new index, and 
updates to that index will be missed. For now, I have commented out the server-side 
UPSERT SELECT optimisation for immutable tables altogether. Let me know if you 
have a good idea for preventing this.

{code}
-&& !(table.isImmutableRows() && !table.getIndexes().isEmpty())
+//TODO UPSERT SELECT on Immutable tables without the indexes can also be optimized
+//but we need to handle case when index is created during UPSERT SELECT is already started
+//see ImmutableIndexIT#testCreateIndexDuringUpsertSelect
+&& !(table.isImmutableRows() /*&& !table.getIndexes().isEmpty()*/)
{code}
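The race the disabled optimisation guards against can be sketched as follows. This is a minimal, self-contained illustration with hypothetical names (not Phoenix's actual classes): the server snapshots the table's index list when the UPSERT SELECT is planned, so an index created mid-flight receives none of the writes.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the plan-time index-snapshot race; names are
// illustrative, not Phoenix's actual metadata classes.
public class IndexSnapshotRaceSketch {

    // Counts index updates actually applied: the server only maintains
    // indexes it knew about at plan time, so an index present at write time
    // but absent from the snapshot silently misses every row.
    static int indexUpdatesApplied(List<String> indexesAtPlanTime,
                                   List<String> indexesAtWriteTime,
                                   int rowsWritten) {
        int applied = 0;
        for (String idx : indexesAtWriteTime) {
            if (indexesAtPlanTime.contains(idx)) {
                applied += rowsWritten;
            }
        }
        return applied;
    }

    public static void main(String[] args) {
        List<String> atPlanTime = new ArrayList<>();   // no indexes yet
        List<String> atWriteTime = new ArrayList<>();
        atWriteTime.add("IDX5");                       // created mid-flight
        // All 2,000,000 upserted rows miss the new index.
        System.out.println(indexUpdatesApplied(atPlanTime, atWriteTime, 2_000_000));
    }
}
```

Running the UPSERT SELECT on the client instead avoids this, because the client re-resolves table metadata (and thus sees the new index) as it writes.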

> After PHOENIX-3271 higher memory consumption on RS leading to OOM/abort on 
> immutable index creation with multiple regions on single RS
> --
>
> Key: PHOENIX-3649
> URL: https://issues.apache.org/jira/browse/PHOENIX-3649
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.9.0
>        Reporter: Mujtaba Chohan
>Assignee: Ankit Singhal
>Priority: Blocker
> Fix For: 4.9.1, 4.10.0
>
> Attachments: PHOENIX-3649.patch, PHOENIX-3649_v1.patch
>
>
> *Configuration*
> hbase-0.98.23 standalone
> Heap 5GB
> *When*
> Verified that this happens after PHOENIX-3271 Distribute UPSERT SELECT across 
> cluster. 
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commitdiff;h=accd4a276d1085e5d1069caf93798d8f301e4ed6
> To repro
> {noformat}
> CREATE TABLE INDEXED_TABLE (HOST CHAR(2) NOT NULL,DOMAIN VARCHAR NOT NULL, 
> FEATURE VARCHAR NOT NULL,DATE DATE NOT NULL,USAGE.CORE BIGINT,USAGE.DB 
> BIGINT,STATS.ACTIVE_VISITOR INTEGER CONSTRAINT PK PRIMARY KEY (HOST, DOMAIN, 
> FEATURE, DATE)) IMMUTABLE_ROWS=true,MAX_FILESIZE=30485760
> {noformat}
> Upsert 2M rows (CSV is available at https://goo.gl/OsTSKB), which will create 
> ~4 regions on a single RS, and then create the index with data present
> {noformat}
> CREATE INDEX idx5 ON INDEXED_TABLE (CORE) INCLUDE (DB,ACTIVE_VISITOR)
> {noformat}
> From RS log
> {noformat}
> 2017-02-02 13:29:06,899 WARN  [rs,51371,1486070044538-HeapMemoryChore] 
> regionserver.HeapMemoryManager: heapOccupancyPercent 0.97875696 is above heap 
> occupancy alarm watermark (0.95)
> 2017-02-02 13:29:18,198 INFO  [SessionTracker] server.ZooKeeperServer: 
> Expiring session 0x15a00ad4f31, timeout of 1ms exceeded
> 2017-02-02 13:29:18,231 WARN  [JvmPauseMonitor] util.JvmPauseMonitor: 
> Detected pause in JVM or host machine (eg GC): pause of approximately 10581ms
> GC pool 'ParNew' had collection(s): count=4 time=139ms
> 2017-02-02 13:29:19,669 FATAL [RS:0;rs:51371-EventThread] 
> regionserver.HRegionServer: ABORTING region server rs,51371,1486070044538: 
> regionserver:51371-0x15a00ad4f31, quorum=localhost:2181, baseZNode=/hbase 
> regionserver:51371-0x15a00ad4f31 received expired from ZooKeeper, aborting
> {noformat}
> Prior to the change, index creation succeeds with as little as a 2GB heap.
> [~an...@apache.org]





[jira] [Updated] (PHOENIX-3694) Drop schema does not invalidate schema from the server cache

2017-02-27 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-3694:
---
Attachment: PHOENIX-3694_v1.patch

> Drop schema does not invalidate schema from the server cache
> 
>
> Key: PHOENIX-3694
> URL: https://issues.apache.org/jira/browse/PHOENIX-3694
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>    Reporter: Ankit Singhal
>    Assignee: Ankit Singhal
> Fix For: 4.10.0
>
> Attachments: PHOENIX-3694.patch, PHOENIX-3694_v1.patch
>
>






[jira] [Commented] (PHOENIX-3694) Drop schema does not invalidate schema from the server cache

2017-02-27 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15885634#comment-15885634
 ] 

Ankit Singhal commented on PHOENIX-3694:


Thanks [~sergey.soldatov], committed v1 to 4.x-HBase-0.98, 4.x-HBase-1.1, and 
master.
Not committed to 4.x-HBase-1.3 as it doesn't seem to be up to date.
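The bug pattern being fixed here can be sketched as a server-side metadata cache whose entry must be evicted when the schema is dropped. A minimal illustration with hypothetical names (not Phoenix's actual metadata-cache API):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a server-side schema cache is only correct if
// DROP SCHEMA also evicts the cached entry. Names are illustrative,
// not Phoenix's actual cache API.
public class SchemaCacheSketch {
    private final Map<String, String> cache = new HashMap<>();

    void createSchema(String name) {
        cache.put(name, "schema:" + name);
    }

    // Buggy variant: drops the schema but leaves the cache entry behind,
    // so a later lookup still "finds" the dropped schema.
    void dropSchemaWithoutInvalidation(String name) {
        // cache.remove(name);  // missing eviction -> stale reads
    }

    // Fixed variant: eviction happens as part of the drop.
    void dropSchema(String name) {
        cache.remove(name);
    }

    boolean schemaExists(String name) {
        return cache.containsKey(name);
    }

    public static void main(String[] args) {
        SchemaCacheSketch server = new SchemaCacheSketch();
        server.createSchema("S1");
        server.dropSchemaWithoutInvalidation("S1");
        System.out.println(server.schemaExists("S1")); // stale entry survives
        server.dropSchema("S1");
        System.out.println(server.schemaExists("S1")); // properly evicted
    }
}
```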

> Drop schema does not invalidate schema from the server cache
> 
>
> Key: PHOENIX-3694
> URL: https://issues.apache.org/jira/browse/PHOENIX-3694
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.10.0
>
> Attachments: PHOENIX-3694.patch
>
>






[jira] [Updated] (PHOENIX-3694) Drop schema does not invalidate schema from the server cache

2017-02-24 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-3694:
---
Attachment: PHOENIX-3694.patch

> Drop schema does not invalidate schema from the server cache
> 
>
> Key: PHOENIX-3694
> URL: https://issues.apache.org/jira/browse/PHOENIX-3694
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>    Reporter: Ankit Singhal
>    Assignee: Ankit Singhal
> Fix For: 4.10.0
>
> Attachments: PHOENIX-3694.patch
>
>






[jira] [Created] (PHOENIX-3694) Drop schema does not invalidate schema from the server cache

2017-02-24 Thread Ankit Singhal (JIRA)
Ankit Singhal created PHOENIX-3694:
--

 Summary: Drop schema does not invalidate schema from the server 
cache
 Key: PHOENIX-3694
 URL: https://issues.apache.org/jira/browse/PHOENIX-3694
 Project: Phoenix
  Issue Type: Bug
Reporter: Ankit Singhal
Assignee: Ankit Singhal
 Fix For: 4.10.0








[jira] [Updated] (PHOENIX-3688) Rebuild(ALTER INDEX IDX ON TABLE REBUILD) of indexes created on table having row_timestamp column will result in no data visible to the User for that Index.

2017-02-23 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-3688:
---
Reporter: Cheng Xu  (was: Ankit Singhal)

> Rebuild(ALTER INDEX IDX ON TABLE REBUILD) of indexes created on table having 
> row_timestamp column will result in no data visible to the User for that 
> Index.
> 
>
> Key: PHOENIX-3688
> URL: https://issues.apache.org/jira/browse/PHOENIX-3688
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Cheng Xu
>





