[jira] [Commented] (PHOENIX-3598) Enable proxy access to Phoenix query server for third party on behalf of end users

2017-07-12 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16083553#comment-16083553
 ] 

Devaraj Das commented on PHOENIX-3598:
--

LGTM. Nice tests.

> Enable proxy access to Phoenix query server for third party on behalf of end 
> users
> --
>
> Key: PHOENIX-3598
> URL: https://issues.apache.org/jira/browse/PHOENIX-3598
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Jerry He
>Assignee: Shi Wang
> Attachments: 0001-PHOENIX-3598.patch, PHOENIX-3598.001.patch, 
> PHOENIX-3598.002.patch
>
>
> This JIRA tracks the follow-on work of CALCITE-1539 needed on Phoenix query 
> server side.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4010) Hash Join cache may not be sent to all regionservers when we have stale HBase meta cache

2017-07-11 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16082697#comment-16082697
 ] 

Devaraj Das commented on PHOENIX-4010:
--

Yeah, +1 on retrying the whole query. It seems simpler, and hopefully we 
wouldn't need to do that often anyway.

> Hash Join cache may not be sent to all regionservers when we have stale HBase 
> meta cache
> 
>
> Key: PHOENIX-4010
> URL: https://issues.apache.org/jira/browse/PHOENIX-4010
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4010.patch
>
>
>  If the region locations have changed and our HBase meta cache is not updated, 
> then we might not send the hash join cache to all region servers hosting the 
> regions.
> ConnectionQueryServicesImpl#getAllTableRegions
> {code}
> boolean reload = false;
> while (true) {
>     try {
>         // We could surface the package-projected
>         // HConnectionImplementation.getNumberOfCachedRegionLocations
>         // to get the sizing info we need, but this would require a new class
>         // in the same package and a cast to this implementation class, so
>         // it's probably not worth it.
>         List<HRegionLocation> locations = Lists.newArrayList();
>         byte[] currentKey = HConstants.EMPTY_START_ROW;
>         do {
>             HRegionLocation regionLocation = connection.getRegionLocation(
>                     TableName.valueOf(tableName), currentKey, reload);
>             locations.add(regionLocation);
>             currentKey = regionLocation.getRegionInfo().getEndKey();
>         } while (!Bytes.equals(currentKey, HConstants.EMPTY_END_ROW));
>         return locations;
> {code}
> Skipping duplicate servers in ServerCacheClient#addServerCache
> {code}
> List<HRegionLocation> locations =
>         services.getAllTableRegions(cacheUsingTable.getPhysicalName().getBytes());
> int nRegions = locations.size();
> ...
> if (!servers.contains(entry)
>         && keyRanges.intersectRegion(regionStartKey, regionEndKey,
>                 cacheUsingTable.getIndexType() == IndexType.LOCAL)) {
>     // Call RPC once per server
>     servers.add(entry);
> {code}
> For example: table 'T' has two regions, R1 and R2, originally both hosted on 
> regionserver RS1. While the Phoenix/HBase connection is still active, R2 is 
> transitioned to RS2, but the stale meta cache still reports the old locations, 
> i.e. R1 and R2 on RS1. When we start copying the hash table, we copy it for R1 
> and skip R2 because they appear to be hosted on the same regionserver. The 
> query on the table will then fail because it is unable to find the hash table 
> cache on RS2 when processing region R2.
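The "retry the whole query" approach endorsed above can be sketched in a minimal, self-contained form; the class and method names below are illustrative stand-ins for this discussion, not actual Phoenix APIs:

```java
import java.util.concurrent.Callable;

// Sketch: if a server reports a missing hash-join cache (simulated here by
// MissingCacheException), invalidate the cached region locations and re-run
// the whole query. All names are hypothetical stand-ins.
public class RetryWholeQuerySketch {
    static class MissingCacheException extends RuntimeException {}

    static <T> T runWithRetry(Callable<T> query, Runnable invalidateMetaCache,
                              int maxAttempts) throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                return query.call();
            } catch (MissingCacheException e) {
                if (attempt >= maxAttempts) throw e;
                invalidateMetaCache.run(); // force fresh region locations
            }
        }
    }

    public static void main(String[] args) throws Exception {
        boolean[] stale = { true }; // first attempt sees stale locations
        String result = runWithRetry(
            () -> { if (stale[0]) throw new MissingCacheException(); return "rows"; },
            () -> stale[0] = false,
            2);
        System.out.println(result); // prints "rows"
    }
}
```

The key design point matches the comment: rather than tracking exactly which servers missed the cache, the query is simply re-executed against refreshed region locations, on the assumption that stale-cache misses are rare.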



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3360) Secondary index configuration is wrong

2016-10-06 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15553907#comment-15553907
 ] 

Devaraj Das commented on PHOENIX-3360:
--

For the short term, can we do this: don't define this configuration in any of 
the configuration files, so clients use the default RPC controller. But in the 
regionserver, in the Phoenix code path where it instantiates configuration 
objects, we manually set the configuration to use ServerRpcController. Then all 
connections using that config object see it...
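The server-side-only idea above can be sketched as follows. A plain java.util.Properties object stands in for org.apache.hadoop.conf.Configuration, and the configuration key and factory class names are assumptions to verify against the actual HBase/Phoenix source:

```java
import java.util.Properties;

// Sketch: leave the RPC controller factory out of the shipped config files,
// and have only the server-side Phoenix code path set it programmatically on
// the Configuration objects it creates. Clients never see this setting and
// therefore keep the default RPC controller.
public class ServerRpcControllerConfigSketch {
    // Assumed HBase key for plugging in a custom RpcControllerFactory.
    static final String CONTROLLER_FACTORY_KEY = "hbase.rpc.controllerfactory.class";

    static void configureForServer(Properties conf) {
        conf.setProperty(CONTROLLER_FACTORY_KEY,
            "org.apache.hadoop.hbase.ipc.controller.ServerRpcControllerFactory");
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        configureForServer(conf);
        System.out.println(conf.getProperty(CONTROLLER_FACTORY_KEY));
    }
}
```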

> Secondary index configuration is wrong
> --
>
> Key: PHOENIX-3360
> URL: https://issues.apache.org/jira/browse/PHOENIX-3360
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Priority: Critical
>
> IndexRpcScheduler allocates some handler threads and uses a higher priority 
> for RPCs. The corresponding IndexRpcController is not used by default as it 
> is, but used through ServerRpcControllerFactory that we configure from Ambari 
> by default which sets the priority of the outgoing RPCs to either metadata 
> priority, or the index priority.
> However, after reading code of IndexRpcController / ServerRpcController it 
> seems that the IndexRPCController DOES NOT look at whether the outgoing RPC 
> is for an Index table or not. It just sets ALL rpc priorities to be the index 
> priority. The intention seems to be the case that ONLY on servers, we 
> configure ServerRpcControllerFactory, and with clients we NEVER configure 
> ServerRpcControllerFactory, but instead use ClientRpcControllerFactory. We 
> configure ServerRpcControllerFactory from Ambari, which in effect makes it so 
> that ALL rpcs from Phoenix are handled only by the index handlers by default. 
> This means all deadlock cases are still there. 
> The documentation in https://phoenix.apache.org/secondary_indexing.html is 
> also wrong in this sense. It does not talk about server side / client side. 
> Plus this way of configuring different values is not how HBase configuration 
> is deployed. We cannot have the configuration show ServerRpcControllerFactory 
> only for server nodes, because the clients running on those nodes would also 
> see the wrong values. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3159) CachingHTableFactory may close HTable during eviction even if it is getting used for writing by another thread.

2016-09-28 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15531520#comment-15531520
 ] 

Devaraj Das commented on PHOENIX-3159:
--

+1 

> CachingHTableFactory may close HTable during eviction even if it is getting 
> used for writing by another thread.
> ---
>
> Key: PHOENIX-3159
> URL: https://issues.apache.org/jira/browse/PHOENIX-3159
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.9.0
>
> Attachments: PHOENIX-3159.patch, PHOENIX-3159_v1.patch, 
> PHOENIX-3159_v2.patch, PHOENIX-3159_v3.patch, PHOENIX-3159_v4.patch
>
>
> CachingHTableFactory may close an HTable during eviction even while it is 
> being used for writing by another thread, which causes the writing thread to 
> fail and the index to be disabled.
> LRU eviction closes the HTable or the underlying connection when the cache is 
> full and a new HTable is requested:
> {code}
> 2016-08-04 13:45:21,109 DEBUG 
> [nat-s11-4-ioss-phoenix-1-5.openstacklocal,16020,1470297472814-index-writer--pool11-t35]
>  client.ConnectionManager$HConnectionImplementation: Closing HConnection 
> (debugging purposes only)
> java.lang.Exception
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.internalClose(ConnectionManager.java:2423)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.close(ConnectionManager.java:2447)
> at 
> org.apache.hadoop.hbase.client.CoprocessorHConnection.close(CoprocessorHConnection.java:41)
> at 
> org.apache.hadoop.hbase.client.HTableWrapper.internalClose(HTableWrapper.java:91)
> at 
> org.apache.hadoop.hbase.client.HTableWrapper.close(HTableWrapper.java:107)
> at 
> org.apache.phoenix.hbase.index.table.CachingHTableFactory$HTableInterfaceLRUMap.removeLRU(CachingHTableFactory.java:61)
> at 
> org.apache.commons.collections.map.LRUMap.addMapping(LRUMap.java:256)
> at 
> org.apache.commons.collections.map.AbstractHashedMap.put(AbstractHashedMap.java:284)
> at 
> org.apache.phoenix.hbase.index.table.CachingHTableFactory.getTable(CachingHTableFactory.java:100)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:160)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:136)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> But the IndexWriter was using this old connection to write to the table, 
> which was closed during LRU eviction:
> {code}
> 2016-08-04 13:44:59,553 ERROR [htable-pool659-t1] client.AsyncProcess: Cannot 
> get replica 0 location for 
> {"totalColumns":1,"row":"\\xC7\\x03\\x04\\x06X\\x1C)\\x00\\x80\\x07\\xB0X","families":{"0":[{"qualifier":"_0","vlen":2,"tag":[],"timestamp":1470318296425}]}}
> java.io.IOException: hconnection-0x21f468be closed
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1153)
> at 
> org.apache.hadoop.hbase.client.CoprocessorHConnection.locateRegion(CoprocessorHConnection.java:41)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.findAllLocationsOrFail(AsyncProcess.java:949)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.groupAndSendMultiAction(AsyncProcess.java:866)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.resubmit(AsyncProcess.java:1195)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.receiveGlobalFailure(AsyncProcess.java:1162)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.access$1100(AsyncProcess.java:584)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl$SingleServerRequestRunnable.run(AsyncProcess.java:727)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> Although the workaround is to increase the cache size 
> (index.tablefactory.cache.size), we should still handle the closing of 

[jira] [Commented] (PHOENIX-3159) CachingHTableFactory may close HTable during eviction even if it is getting used for writing by another thread.

2016-09-19 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15504316#comment-15504316
 ] 

Devaraj Das commented on PHOENIX-3159:
--

Sorry [~jamestaylor] for the delay. Hopefully, if [~an...@apache.org] can 
answer the questions quickly, we can still get this into 4.8.1.
1. The "pool" in CachingHTableFactory shouldn't be shut down if it was 
initially obtained via getTable(tableName, pool), should it?
2. For the case where you create the pool, should maxThreads simply be 
Integer.MAX_VALUE? Assuming that *all* table accesses go through this 
CachingHTableFactory, it should be okay (no leaks and such)? I say this because 
when the pool is passed in from outside, it seems to be created with 
Integer.MAX_VALUE.
3. After the pool is shut down, I wonder if you should call 
pool.awaitTermination(), or else the threads will be killed if the JVM is 
exiting (this should be verified though; I am recalling this from memory). I 
also wonder whether this (threads doing partial work and then exiting) actually 
matters in practice for Phoenix.


> CachingHTableFactory may close HTable during eviction even if it is getting 
> used for writing by another thread.
> ---
>
> Key: PHOENIX-3159
> URL: https://issues.apache.org/jira/browse/PHOENIX-3159
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.9.0
>
> Attachments: PHOENIX-3159.patch, PHOENIX-3159_v1.patch, 
> PHOENIX-3159_v2.patch, PHOENIX-3159_v3.patch
>

[jira] [Commented] (PHOENIX-3159) CachingHTableFactory may close HTable during eviction even if it is getting used for writing by another thread.

2016-09-16 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15496617#comment-15496617
 ] 

Devaraj Das commented on PHOENIX-3159:
--

Will get to it today or over the weekend, James.

> CachingHTableFactory may close HTable during eviction even if it is getting 
> used for writing by another thread.
> ---
>
> Key: PHOENIX-3159
> URL: https://issues.apache.org/jira/browse/PHOENIX-3159
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.8.1
>
> Attachments: PHOENIX-3159.patch, PHOENIX-3159_v1.patch, 
> PHOENIX-3159_v2.patch, PHOENIX-3159_v3.patch
>

[jira] [Commented] (PHOENIX-3175) Unnecessary UGI proxy user impersonation check

2016-08-16 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15422858#comment-15422858
 ] 

Devaraj Das commented on PHOENIX-3175:
--

Yes [~elserj]. +1

> Unnecessary UGI proxy user impersonation check
> --
>
> Key: PHOENIX-3175
> URL: https://issues.apache.org/jira/browse/PHOENIX-3175
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-3175.001.patch
>
>
> While looking at some issues reported by [~jpplayer], we found that some of 
> the default Hadoop proxyuser configuration properties in core-site.xml 
> weren't working as intended.
> [~devaraj]'s keen eye noticed that PQS was doing an unnecessary UGI ProxyUser 
> impersonation check.
> We can rely on the SPNEGO authentication as the barrier to use PQS, but then 
> defer to the impersonation check that HBase is already doing as to which PQS 
> instances are allowed to talk to HBase (defined by core-site.xml).
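The HBase-side impersonation check mentioned above is driven by the standard Hadoop proxyuser properties in core-site.xml. A typical entry might look like the following; the principal name "HTTP" is illustrative, and the wildcards should be restricted as appropriate for a real deployment:

```xml
<!-- Allow the (illustrative) PQS service user "HTTP" to impersonate
     end users. Replace the wildcards with specific hosts/groups to
     restrict which PQS instances may proxy for which users. -->
<property>
  <name>hadoop.proxyuser.HTTP.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.HTTP.groups</name>
  <value>*</value>
</property>
```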



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (PHOENIX-3175) Unnecessary UGI proxy user impersonation check

2016-08-16 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated PHOENIX-3175:
-
Comment: was deleted

(was: Yes [~elserj]. +1)

> Unnecessary UGI proxy user impersonation check
> --
>
> Key: PHOENIX-3175
> URL: https://issues.apache.org/jira/browse/PHOENIX-3175
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-3175.001.patch
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3175) Unnecessary UGI proxy user impersonation check

2016-08-16 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15422857#comment-15422857
 ] 

Devaraj Das commented on PHOENIX-3175:
--

Yes [~elserj]. +1

> Unnecessary UGI proxy user impersonation check
> --
>
> Key: PHOENIX-3175
> URL: https://issues.apache.org/jira/browse/PHOENIX-3175
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 4.9.0, 4.8.1
>
> Attachments: PHOENIX-3175.001.patch
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3159) CachingHTableFactory may close HTable during eviction even if it is getting used for writing by another thread.

2016-08-10 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15415853#comment-15415853
 ] 

Devaraj Das commented on PHOENIX-3159:
--

Thanks for the clarification, [~an...@apache.org]. Yeah, it should be okay to 
share the thread pool... 

> CachingHTableFactory may close HTable during eviction even if it is getting 
> used for writing by another thread.
> ---
>
> Key: PHOENIX-3159
> URL: https://issues.apache.org/jira/browse/PHOENIX-3159
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.8.1
>
> Attachments: PHOENIX-3159.patch, PHOENIX-3159_v1.patch, 
> PHOENIX-3159_v2.patch
>

[jira] [Commented] (PHOENIX-3159) CachingHTableFactory may close HTable during eviction even if it is getting used for writing by another thread.

2016-08-09 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15414384#comment-15414384
 ] 

Devaraj Das commented on PHOENIX-3159:
--

Thanks [~an...@apache.org]. Do check if this race is possible:
1. At time t1, the LRU cache evictor thread calls getReferenceCount and it 
returns 0. The context switches to thread-1.
2. At time t1+1, thread-1 does getTable on the same table, but the context 
switches away from this thread before it can do incrementReferenceCount.
3. At time t1+2, the LRU cache evictor continues execution and goes ahead and 
closes the CachedHTableWrapper instance. Since getReferenceCount is still 0, 
this finally invokes close() on the real table instance.
We would then be working with a "closed" htable instance (wrapped with 
CachedHTableWrapper), since that is what getTable is going to return. If the 
above can happen, it should be prevented...
Also, remove the reference to workingTables from the patch.
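One way to rule out the check-then-act race described above is to make "check the count and close" and "hand out the table and increment the count" atomic with respect to each other, by performing both under the same lock. The sketch below is a self-contained illustration; the class and method names are hypothetical, not the actual CachedHTableWrapper API:

```java
// Sketch: reference-counted wrapper where retain() (called by getTable) and
// tryClose() (called by the LRU evictor) synchronize on the same monitor, so
// a retain() can never slip in between the evictor's count check and close.
public class RefCountedTableSketch {
    private int refCount = 0;
    private boolean closed = false;

    // getTable() path: fails if the evictor already closed this wrapper,
    // so the caller must fetch/create a fresh table instead.
    public synchronized boolean retain() {
        if (closed) return false;
        refCount++;
        return true;
    }

    public synchronized void release() {
        refCount--;
    }

    // Evictor path: the count check and the close decision are atomic.
    public synchronized boolean tryClose() {
        if (refCount == 0) {
            closed = true; // would close the real HTable here
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        RefCountedTableSketch t = new RefCountedTableSketch();
        t.retain();                       // a writer is using the table
        System.out.println(t.tryClose()); // false: still in use
        t.release();
        System.out.println(t.tryClose()); // true: safe to close
        System.out.println(t.retain());   // false: already closed
    }
}
```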

> CachingHTableFactory may close HTable during eviction even if it is getting 
> used for writing by another thread.
> ---
>
> Key: PHOENIX-3159
> URL: https://issues.apache.org/jira/browse/PHOENIX-3159
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.8.1
>
> Attachments: PHOENIX-3159.patch, PHOENIX-3159_v1.patch
>
>
> CachingHTableFactory may close HTable during eviction even if it is getting 
> used for writing by another thread which results in writing thread to fail 
> and index is disabled.
> LRU eviction closing HTable or underlying connection when cache is full and 
> new HTable is requested.
> {code}
> 2016-08-04 13:45:21,109 DEBUG 
> [nat-s11-4-ioss-phoenix-1-5.openstacklocal,16020,1470297472814-index-writer--pool11-t35]
>  client.ConnectionManager$HConnectionImplementation: Closing HConnection 
> (debugging purposes only)
> java.lang.Exception
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.internalClose(ConnectionManager.java:2423)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.close(ConnectionManager.java:2447)
> at 
> org.apache.hadoop.hbase.client.CoprocessorHConnection.close(CoprocessorHConnection.java:41)
> at 
> org.apache.hadoop.hbase.client.HTableWrapper.internalClose(HTableWrapper.java:91)
> at 
> org.apache.hadoop.hbase.client.HTableWrapper.close(HTableWrapper.java:107)
> at 
> org.apache.phoenix.hbase.index.table.CachingHTableFactory$HTableInterfaceLRUMap.removeLRU(CachingHTableFactory.java:61)
> at 
> org.apache.commons.collections.map.LRUMap.addMapping(LRUMap.java:256)
> at 
> org.apache.commons.collections.map.AbstractHashedMap.put(AbstractHashedMap.java:284)
> at 
> org.apache.phoenix.hbase.index.table.CachingHTableFactory.getTable(CachingHTableFactory.java:100)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:160)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:136)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> But the IndexWriter was still using this old connection, which had been 
> closed during the LRU eviction, to write to the table:
> {code}
> 2016-08-04 13:44:59,553 ERROR [htable-pool659-t1] client.AsyncProcess: Cannot 
> get replica 0 location for 
> {"totalColumns":1,"row":"\\xC7\\x03\\x04\\x06X\\x1C)\\x00\\x80\\x07\\xB0X","families":{"0":[{"qualifier":"_0","vlen":2,"tag":[],"timestamp":1470318296425}]}}
> java.io.IOException: hconnection-0x21f468be closed
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1153)
> at 
> org.apache.hadoop.hbase.client.CoprocessorHConnection.locateRegion(CoprocessorHConnection.java:41)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.findAllLocationsOrFail(AsyncProcess.java:949)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.groupAndSendMultiAction(AsyncProcess.java:866)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.resubmit(AsyncProcess.java:1195)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.receiveGlobalFailure(AsyncProcess.java:1162)
> at 
> 

[jira] [Commented] (PHOENIX-3164) PhoenixConnection leak in PQS with security enabled

2016-08-09 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15414202#comment-15414202
 ] 

Devaraj Das commented on PHOENIX-3164:
--

LGTM

> PhoenixConnection leak in PQS with security enabled
> ---
>
> Key: PHOENIX-3164
> URL: https://issues.apache.org/jira/browse/PHOENIX-3164
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 4.8.1
>
> Attachments: PHOENIX-3164.001.patch
>
>
> Noticed this one yesterday in some testing. PQS clients were getting stuck in 
> a loop trying to find the location of the hbase:meta region, but never 
> actually finding it despite HBase appearing to be 100% healthy.
> In PQS:
> {noformat}
> "qtp1908490900-20" daemon prio=10 tid=0x7f67284ae800 nid=0x72b8 waiting 
> on condition [0x7f66f570a000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
>   at java.lang.Thread.sleep(Native Method)
>   at java.lang.Thread.sleep(Thread.java:340)
>   at java.util.concurrent.TimeUnit.sleep(TimeUnit.java:360)
>   at 
> org.apache.hadoop.hbase.util.RetryCounter.sleepUntilNextRetry(RetryCounter.java:158)
>   at 
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:373)
>   at org.apache.hadoop.hbase.zookeeper.ZKUtil.getData(ZKUtil.java:622)
>   at 
> org.apache.hadoop.hbase.zookeeper.MetaTableLocator.getMetaRegionState(MetaTableLocator.java:491)
>   at 
> org.apache.hadoop.hbase.zookeeper.MetaTableLocator.getMetaRegionLocation(MetaTableLocator.java:172)
>   at 
> org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:608)
>   at 
> org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:589)
>   at 
> org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:568)
>   at 
> org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:61)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateMeta(ConnectionManager.java:1192)
>   - locked <0x00070b109930> (a java.lang.Object)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1159)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:300)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:156)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:326)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:301)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:166)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.(ClientScanner.java:161)
>   at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:794)
>   at 
> org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:602)
>   at 
> org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:366)
>   at 
> org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:405)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2358)
>   - locked <0x00070b0c4d58> (a 
> org.apache.phoenix.query.ConnectionQueryServicesImpl)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$13.call(ConnectionQueryServicesImpl.java:2327)
>   at 
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:78)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2327)
>   at 
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:233)
>   at 
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:144)
>   at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:202)
>   at java.sql.DriverManager.getConnection(DriverManager.java:571)
>   at java.sql.DriverManager.getConnection(DriverManager.java:187)
>   at 
> org.apache.calcite.avatica.jdbc.JdbcMeta.openConnection(JdbcMeta.java:616)
>   - locked <0x00070538c5e0> (a 
> org.apache.calcite.avatica.jdbc.JdbcMeta)
>   at 
> 

[jira] [Created] (PHOENIX-3169) Look (and rationalize) the code differences in the various source branches

2016-08-09 Thread Devaraj Das (JIRA)
Devaraj Das created PHOENIX-3169:


 Summary: Look (and rationalize) the code differences in the 
various source branches
 Key: PHOENIX-3169
 URL: https://issues.apache.org/jira/browse/PHOENIX-3169
 Project: Phoenix
  Issue Type: Task
Affects Versions: 4.8.0
Reporter: Devaraj Das


I was just trying to see if anything differed in the various branches 
maintained for the different hbase versions Phoenix supports. In particular, 
these: 
https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.8.0-HBase-1.2-rc2/src/
https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.8.0-HBase-1.1-rc2/src/
https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.8.0-HBase-1.0-rc2/src/
https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.8.0-HBase-0.98-rc2/src/
I see that there are differences in various files. For example, 
RuleGeneratorTest.java has major differences between the code for the 1.0 and 
1.1 versions. I saw some more, but it's easy to get the differences by doing a 
"diff -r" on the source directories (after untarring the tarballs).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-3165) System table integrity check and repair tool

2016-08-09 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15413922#comment-15413922
 ] 

Devaraj Das commented on PHOENIX-3165:
--

Nice one, [~apurtell]. +1 for this.

> System table integrity check and repair tool
> 
>
> Key: PHOENIX-3165
> URL: https://issues.apache.org/jira/browse/PHOENIX-3165
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Andrew Purtell
>Priority: Critical
>
> When the Phoenix system tables become corrupt, recovery is a painstaking 
> process of low-level examination of table contents and manipulation of the 
> same with the HBase shell. This is very difficult work providing no margin of 
> safety, and it is a critical gap in terms of usability.
> At the OS level, we have fsck.
> At the HDFS level, we have fsck (integrity checking only, though)
> At the HBase level, we have hbck. 
> At the Phoenix level, we lack a system table repair tool. 
> Implement a tool that:
> - Does not depend on the Phoenix client.
> - Supports integrity checking of SYSTEM tables. Check for the existence of 
> all required columns in entries. Check that entries exist for all Phoenix 
> managed tables (implies Phoenix should add supporting advisory-only metadata 
> to the HBase table schemas). Check that serializations are valid. 
> - Supports complete repair of SYSTEM.CATALOG and recreation, if necessary, of 
> other tables like SYSTEM.STATS which can be dropped to recover from an 
> emergency. We should be able to drop SYSTEM.CATALOG (or any other SYSTEM 
> table), run the tool, and have a completely correct recreation of 
> SYSTEM.CATALOG available at the end of its execution.
> - To the extent we have or introduce cross-system-table invariants, check 
> them and offer a repair or reconstruction option.





[jira] [Commented] (PHOENIX-3159) CachingHTableFactory may close HTable during eviction even if it is getting used for writing by another thread.

2016-08-08 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15411400#comment-15411400
 ] 

Devaraj Das commented on PHOENIX-3159:
--

[~an...@apache.org] thanks. I missed one thing though - adding and removing 
entries in the workingTables set should also be synchronized on workingTables. 
Otherwise it can race with the contains() check (a 'table' could be inserted 
into workingTables just after the workingTables.contains() check - the table 
would then be closed even though it should remain open).
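To make the race concrete, here is a minimal self-contained sketch of the synchronization being asked for (the class and method names are hypothetical, not the actual CachingHTableFactory code): the contains() check and the add/remove of working tables must hold the same lock, so that eviction's check-then-close is atomic with respect to a writer checking a table out.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of the fix discussed above: all accesses to
// workingTables, including eviction's contains() check, synchronize on the
// same object, so a table cannot be checked out between the check and close.
public class WorkingTableCache {
    private final Set<String> workingTables = new HashSet<>();
    private final Set<String> closed = new HashSet<>();

    // A writer marks the table as in use before writing.
    public void checkOut(String table) {
        synchronized (workingTables) {   // same lock as evict()
            workingTables.add(table);
        }
    }

    // The writer releases the table when done.
    public void checkIn(String table) {
        synchronized (workingTables) {
            workingTables.remove(table);
        }
    }

    // LRU eviction: close only if no writer holds the table. Holding the
    // lock across contains() + close makes the check-and-act atomic.
    public boolean evict(String table) {
        synchronized (workingTables) {
            if (workingTables.contains(table)) {
                return false;            // in use: defer the close
            }
            closed.add(table);           // stand-in for htable.close()
            return true;
        }
    }

    public boolean isClosed(String table) {
        synchronized (workingTables) {
            return closed.contains(table);
        }
    }
}
```

Without the shared lock, checkOut() could land between evict()'s contains() check and its close, and the writer would then fail exactly as in the stack traces above.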

> CachingHTableFactory may close HTable during eviction even if it is getting 
> used for writing by another thread.
> ---
>
> Key: PHOENIX-3159
> URL: https://issues.apache.org/jira/browse/PHOENIX-3159
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.8.1
>
> Attachments: PHOENIX-3159.patch
>
>
> CachingHTableFactory may close an HTable during eviction even while it is 
> still being used for writing by another thread, which causes the writing 
> thread to fail and the index to be disabled.
> LRU eviction closes the HTable (or its underlying connection) when the cache 
> is full and a new HTable is requested.
> {code}
> 2016-08-04 13:45:21,109 DEBUG 
> [nat-s11-4-ioss-phoenix-1-5.openstacklocal,16020,1470297472814-index-writer--pool11-t35]
>  client.ConnectionManager$HConnectionImplementation: Closing HConnection 
> (debugging purposes only)
> java.lang.Exception
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.internalClose(ConnectionManager.java:2423)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.close(ConnectionManager.java:2447)
> at 
> org.apache.hadoop.hbase.client.CoprocessorHConnection.close(CoprocessorHConnection.java:41)
> at 
> org.apache.hadoop.hbase.client.HTableWrapper.internalClose(HTableWrapper.java:91)
> at 
> org.apache.hadoop.hbase.client.HTableWrapper.close(HTableWrapper.java:107)
> at 
> org.apache.phoenix.hbase.index.table.CachingHTableFactory$HTableInterfaceLRUMap.removeLRU(CachingHTableFactory.java:61)
> at 
> org.apache.commons.collections.map.LRUMap.addMapping(LRUMap.java:256)
> at 
> org.apache.commons.collections.map.AbstractHashedMap.put(AbstractHashedMap.java:284)
> at 
> org.apache.phoenix.hbase.index.table.CachingHTableFactory.getTable(CachingHTableFactory.java:100)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:160)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:136)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> But the IndexWriter was still using this old connection, which had been 
> closed during the LRU eviction, to write to the table:
> {code}
> 2016-08-04 13:44:59,553 ERROR [htable-pool659-t1] client.AsyncProcess: Cannot 
> get replica 0 location for 
> {"totalColumns":1,"row":"\\xC7\\x03\\x04\\x06X\\x1C)\\x00\\x80\\x07\\xB0X","families":{"0":[{"qualifier":"_0","vlen":2,"tag":[],"timestamp":1470318296425}]}}
> java.io.IOException: hconnection-0x21f468be closed
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1153)
> at 
> org.apache.hadoop.hbase.client.CoprocessorHConnection.locateRegion(CoprocessorHConnection.java:41)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.findAllLocationsOrFail(AsyncProcess.java:949)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.groupAndSendMultiAction(AsyncProcess.java:866)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.resubmit(AsyncProcess.java:1195)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.receiveGlobalFailure(AsyncProcess.java:1162)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.access$1100(AsyncProcess.java:584)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl$SingleServerRequestRunnable.run(AsyncProcess.java:727)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> 

[jira] [Updated] (PHOENIX-3126) The driver implementation should take into account the context of the user

2016-08-02 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated PHOENIX-3126:
-
Assignee: Prabhjyot Singh  (was: Devaraj Das)

> The driver implementation should take into account the context of the user
> --
>
> Key: PHOENIX-3126
> URL: https://issues.apache.org/jira/browse/PHOENIX-3126
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Devaraj Das
>Assignee: Prabhjyot Singh
> Fix For: 4.8.0
>
> Attachments: PHOENIX-3126.2.txt, PHOENIX-3126.txt, .java
>
>
> Ran into this issue ... 
> We have an application that proxies various users internally and fires 
> queries for those users. The Phoenix driver implementation caches connections 
> it successfully creates and keys it by the ConnectionInfo. The ConnectionInfo 
> doesn't take into consideration the "user". So random users (including those 
> that aren't supposed to access) can access the tables in this sort of a setup.
> The fix is to also consider the User in the ConnectionInfo.
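The fix described above can be sketched as follows (an illustrative cache key, not Phoenix's actual ConnectionInfo class): if the user is left out of equals()/hashCode(), two different users connecting to the same quorum share one cached connection; including the user separates them.

```java
import java.util.Objects;

// Illustrative sketch (hypothetical class, not Phoenix's ConnectionInfo):
// a key for the driver's connection cache. The 'user' field is the fix -
// without it, equals()/hashCode() collide for different users and one
// user's cached connection is handed to another.
public final class ConnectionKey {
    private final String zkQuorum;
    private final int zkPort;
    private final String user;   // the fix: make the user part of the key

    public ConnectionKey(String zkQuorum, int zkPort, String user) {
        this.zkQuorum = zkQuorum;
        this.zkPort = zkPort;
        this.user = user;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof ConnectionKey)) return false;
        ConnectionKey k = (ConnectionKey) o;
        return zkPort == k.zkPort
            && Objects.equals(zkQuorum, k.zkQuorum)
            && Objects.equals(user, k.user);
    }

    @Override
    public int hashCode() {
        return Objects.hash(zkQuorum, zkPort, user);
    }
}
```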





[jira] [Updated] (PHOENIX-3126) The driver implementation should take into account the context of the user

2016-08-02 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated PHOENIX-3126:
-
Assignee: Devaraj Das  (was: Prabhjyot Singh)

> The driver implementation should take into account the context of the user
> --
>
> Key: PHOENIX-3126
> URL: https://issues.apache.org/jira/browse/PHOENIX-3126
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Devaraj Das
>Assignee: Devaraj Das
> Fix For: 4.8.0
>
> Attachments: PHOENIX-3126.2.txt, PHOENIX-3126.txt, .java
>
>
> Ran into this issue ... 
> We have an application that proxies various users internally and fires 
> queries for those users. The Phoenix driver implementation caches connections 
> it successfully creates and keys it by the ConnectionInfo. The ConnectionInfo 
> doesn't take into consideration the "user". So random users (including those 
> that aren't supposed to access) can access the tables in this sort of a setup.
> The fix is to also consider the User in the ConnectionInfo.





[jira] [Commented] (PHOENIX-3126) The driver implementation should take into account the context of the user

2016-08-02 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15404636#comment-15404636
 ] 

Devaraj Das commented on PHOENIX-3126:
--

Yeah, User.getCurrent must never return null unless there is a horrible bug 
somewhere. But I don't mind having a check and logging...

> The driver implementation should take into account the context of the user
> --
>
> Key: PHOENIX-3126
> URL: https://issues.apache.org/jira/browse/PHOENIX-3126
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Devaraj Das
> Fix For: 4.8.0
>
> Attachments: PHOENIX-3126.txt, .java
>
>
> Ran into this issue ... 
> We have an application that proxies various users internally and fires 
> queries for those users. The Phoenix driver implementation caches connections 
> it successfully creates and keys it by the ConnectionInfo. The ConnectionInfo 
> doesn't take into consideration the "user". So random users (including those 
> that aren't supposed to access) can access the tables in this sort of a setup.
> The fix is to also consider the User in the ConnectionInfo.





[jira] [Commented] (PHOENIX-3126) The driver implementation should take into account the context of the user

2016-07-28 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398732#comment-15398732
 ] 

Devaraj Das commented on PHOENIX-3126:
--

[~jamestaylor] no User.getCurrent cannot ever return null... And, this isn't a 
blocker for 4.8.0...

> The driver implementation should take into account the context of the user
> --
>
> Key: PHOENIX-3126
> URL: https://issues.apache.org/jira/browse/PHOENIX-3126
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Devaraj Das
> Attachments: PHOENIX-3126.txt, .java
>
>
> Ran into this issue ... 
> We have an application that proxies various users internally and fires 
> queries for those users. The Phoenix driver implementation caches connections 
> it successfully creates and keys it by the ConnectionInfo. The ConnectionInfo 
> doesn't take into consideration the "user". So random users (including those 
> that aren't supposed to access) can access the tables in this sort of a setup.
> The fix is to also consider the User in the ConnectionInfo.





[jira] [Updated] (PHOENIX-3126) The driver implementation should take into account the context of the user

2016-07-28 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated PHOENIX-3126:
-
Attachment: .java

[~prabhjyotsingh]'s program that demonstrates the problem. The output should be 
"false true false" but without the patch the output is "false true true".

> The driver implementation should take into account the context of the user
> --
>
> Key: PHOENIX-3126
> URL: https://issues.apache.org/jira/browse/PHOENIX-3126
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Devaraj Das
> Attachments: PHOENIX-3126.txt, .java
>
>
> Ran into this issue ... 
> We have an application that proxies various users internally and fires 
> queries for those users. The Phoenix driver implementation caches connections 
> it successfully creates and keys it by the ConnectionInfo. The ConnectionInfo 
> doesn't take into consideration the "user". So random users (including those 
> that aren't supposed to access) can access the tables in this sort of a setup.
> The fix is to also consider the User in the ConnectionInfo.





[jira] [Updated] (PHOENIX-3126) The driver implementation should take into account the context of the user

2016-07-28 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated PHOENIX-3126:
-
Attachment: PHOENIX-3126.txt

Patch demonstrating the fix.

> The driver implementation should take into account the context of the user
> --
>
> Key: PHOENIX-3126
> URL: https://issues.apache.org/jira/browse/PHOENIX-3126
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Devaraj Das
> Attachments: PHOENIX-3126.txt
>
>
> Ran into this issue ... 
> We have an application that proxies various users internally and fires 
> queries for those users. The Phoenix driver implementation caches connections 
> it successfully creates and keys it by the ConnectionInfo. The ConnectionInfo 
> doesn't take into consideration the "user". So random users (including those 
> that aren't supposed to access) can access the tables in this sort of a setup.
> The fix is to also consider the User in the ConnectionInfo.





[jira] [Created] (PHOENIX-3126) The driver implementation should take into account the context of the user

2016-07-28 Thread Devaraj Das (JIRA)
Devaraj Das created PHOENIX-3126:


 Summary: The driver implementation should take into account the 
context of the user
 Key: PHOENIX-3126
 URL: https://issues.apache.org/jira/browse/PHOENIX-3126
 Project: Phoenix
  Issue Type: Bug
Reporter: Devaraj Das


Ran into this issue ... 
We have an application that proxies various users internally and fires queries 
for those users. The Phoenix driver implementation caches connections it 
successfully creates and keys it by the ConnectionInfo. The ConnectionInfo 
doesn't take into consideration the "user". So random users (including those 
that aren't supposed to access) can access the tables in this sort of a setup.
The fix is to also consider the User in the ConnectionInfo.





[jira] [Commented] (PHOENIX-2743) HivePhoenixHandler for big-big join with predicate push down

2016-03-04 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15180439#comment-15180439
 ] 

Devaraj Das commented on PHOENIX-2743:
--

Definitely is related .. FYI [~nmaillard]

> HivePhoenixHandler for big-big join with predicate push down
> 
>
> Key: PHOENIX-2743
> URL: https://issues.apache.org/jira/browse/PHOENIX-2743
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 4.5.0, 4.6.0
> Environment: hive-1.2.1
>Reporter: JeongMin Ju
>  Labels: features, performance
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Phoenix supports hash join and sort-merge join, but it does not handle 
> big-to-big joins well.
> Therefore we need another method, as Hive has.
> I implemented a hive-phoenix-handler that can access Apache Phoenix tables on 
> HBase using HiveQL.
> hive-phoenix-handler is much faster than hive-hbase-handler because it 
> applies predicate push-down.
> I am publishing the source code to GitHub for contribution; it should be 
> completed by next week.
> https://github.com/mini666/hive-phoenix-handler
> Please review my proposal.





[jira] [Commented] (PHOENIX-2606) Cursor support in Phoenix

2016-02-16 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15149481#comment-15149481
 ] 

Devaraj Das commented on PHOENIX-2606:
--

bq. Another option, given PHOENIX-1428 and HBase 0.98.17, is to implement 
cursors by stepping through the rows in the ResultSet.
Just to get this clear, we'd still need to handle aggregate queries (via 
spooling or something), right [~jamestaylor] / [~ankit.singhal]?

> Cursor support in Phoenix
> -
>
> Key: PHOENIX-2606
> URL: https://issues.apache.org/jira/browse/PHOENIX-2606
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Sudarshan Kadambi
>
> Phoenix should look to support a cursor model where the user could set the 
> fetch size to limit the number of rows that are fetched in each batch. Each 
> batch of result rows would be accompanied by a flag indicating if there are 
> more rows to be fetched for a given query or not. 
> The state management for the cursor could be done in the client side or 
> server side (i.e. HBase, not the Query Server). The client side state 
> management could involve capturing the last key in the batch and using that 
> as the start key for the subsequent scan operation. The downside of this 
> model is that if there were any intervening inserts or deletes in the result 
> set of the query, backtracking on the cursor would reflect these additional 
> rows (consider a page down, followed by a page up showing a different set of 
> result rows). Similarly, if the cursor is defined over the results of a join 
> or an aggregation, these operations would need to be performed again when the 
> next batch of result rows are to be fetched. 
> So an alternate approach could be to manage the state server side, wherein 
> there is a query context area in the Regionservers (or, maybe just a 
> temporary table) and the cursor results are fetched from there. This ensures 
> that the cursor has snapshot isolation semantics. I think both models make 
> sense but it might make sense to start with the state management completely 
> on the client side.
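The client-side state management described above (capture the last key of a batch, resume strictly after it, and report whether more rows remain) can be sketched in isolation. This is a hypothetical in-memory model: the TreeMap stands in for a Phoenix table, and a real implementation would re-issue a scan or query with the remembered key as the start key.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.NavigableMap;

// Hypothetical sketch of client-side cursor state: the cursor holds only
// the last key it returned, and each batch is accompanied by a more-rows
// flag, as proposed in the issue description.
public class ClientSideCursor {
    private final NavigableMap<String, String> table; // stand-in for a scan
    private final int fetchSize;
    private String lastKey = null;                    // client-held state
    private boolean moreRows = true;

    public ClientSideCursor(NavigableMap<String, String> table, int fetchSize) {
        this.table = table;
        this.fetchSize = fetchSize;
    }

    public List<String> nextBatch() {
        // Resume strictly after the last key of the previous batch.
        NavigableMap<String, String> rest =
            (lastKey == null) ? table : table.tailMap(lastKey, false);
        List<String> batch = new ArrayList<>();
        for (String key : rest.keySet()) {
            if (batch.size() == fetchSize) break;
            batch.add(key);
        }
        if (!batch.isEmpty()) lastKey = batch.get(batch.size() - 1);
        // A short batch means the scan is exhausted; a full batch may or
        // may not be, so peek one key ahead.
        moreRows = batch.size() == fetchSize && table.higherKey(lastKey) != null;
        return batch;
    }

    public boolean hasMoreRows() { return moreRows; }
}
```

Note this model also exhibits the drawback described above: intervening inserts or deletes between batches are visible, since only the boundary key is remembered.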





[jira] [Commented] (PHOENIX-1973) Improve CsvBulkLoadTool performance by moving keyvalue construction from map phase to reduce phase

2016-02-04 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15133647#comment-15133647
 ] 

Devaraj Das commented on PHOENIX-1973:
--

Yeah, the numbers look nice. Did you try with map output compression as well, 
[~sergey.soldatov]?

> Improve CsvBulkLoadTool performance by moving keyvalue construction from map 
> phase to reduce phase
> --
>
> Key: PHOENIX-1973
> URL: https://issues.apache.org/jira/browse/PHOENIX-1973
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Sergey Soldatov
> Fix For: 4.4.1
>
> Attachments: PHOENIX-1973-1.patch, PHOENIX-1973-2.patch
>
>
> It's similar to HBASE-8768. The only thing is that we need to write a custom 
> mapper and reducer in Phoenix. In the map phase we just need to get the row 
> key from the primary key columns and write the full text of the line as usual 
> (to ensure sorting). In the reducer we get the actual key values by running 
> the upsert query.
> This greatly reduces the amount of map output written to disk and the data 
> that needs to be transferred over the network.
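The reshuffle proposed above can be sketched without any Hadoop dependencies (all names here are hypothetical, and the CSV handling is deliberately naive): the "map" side emits only (rowKey, rawLine), so the shuffle carries small raw text, and the expensive cell construction moves to the "reduce" side.

```java
import java.util.AbstractMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the proposed split of work (not the actual
// CsvBulkLoadTool code): keyvalue construction happens after the shuffle.
public class BulkLoadSketch {
    // Map phase: extract the row key from the primary-key column(s) and
    // pass the raw CSV line through unchanged (to preserve sorting).
    static Map.Entry<String, String> map(String csvLine) {
        String rowKey = csvLine.split(",", 2)[0];  // assume the PK is column 0
        return new AbstractMap.SimpleEntry<>(rowKey, csvLine);
    }

    // Reduce phase: parse the full line and build the (qualifier -> value)
    // cells - the work whose output used to inflate the map side.
    static Map<String, String> reduce(String rowKey, String csvLine) {
        String[] cols = csvLine.split(",");
        Map<String, String> cells = new LinkedHashMap<>();
        for (int i = 1; i < cols.length; i++) {
            cells.put("c" + i, cols[i]);           // hypothetical qualifiers
        }
        return cells;
    }
}
```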





[jira] [Commented] (PHOENIX-2649) GC/OOM during BulkLoad

2016-02-02 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15129504#comment-15129504
 ] 

Devaraj Das commented on PHOENIX-2649:
--

Any reproducible test case, [~sergeysoldatov]?

> GC/OOM during BulkLoad
> --
>
> Key: PHOENIX-2649
> URL: https://issues.apache.org/jira/browse/PHOENIX-2649
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
> Environment: Mac OS, Hadoop 2.7.2, HBase 1.1.2
>Reporter: Sergey Soldatov
>Priority: Critical
>
> Phoenix fails to complete a bulk load of 40 MB of CSV data, hitting a GC heap 
> error during the reduce phase. The problem is in the comparator for 
> TableRowkeyPair: it expects the serialized value to have been written using 
> zero-compressed encoding, but at least in my case it was written the regular 
> way. So, when trying to obtain the lengths of the table name and row key, it 
> always gets zero and reports that those byte arrays are equal. As a result, 
> the reducer receives all the data produced by the mappers in one reduce call 
> and fails with OOM. 
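The encoding mismatch described above can be shown in isolation (a simplified sketch with hypothetical names; Hadoop's WritableUtils implements the full zero-compressed format). A small length such as 10 written as a plain 4-byte big-endian int begins with a 0x00 byte, so a reader expecting a zero-compressed vint decodes length 0 - exactly the "always gets zero" symptom.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Hypothetical sketch of the mismatch: plain int serialization read back
// as if it were a zero-compressed vint.
public class VIntMismatch {
    // Minimal vint decode for the single-byte case: a first byte in
    // [-112, 127] IS the value (as in zero-compressed encoding).
    static int readVIntFirstByte(byte[] buf) {
        byte b = buf[0];
        if (b >= -112) return b;
        throw new IllegalArgumentException("multi-byte vint, not sketched");
    }

    // Write a value the "regular way": 4-byte big-endian int.
    static byte[] writePlainInt(int v) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            new DataOutputStream(bos).writeInt(v);
            return bos.toByteArray();
        } catch (IOException e) {
            throw new AssertionError(e);   // cannot happen for in-memory I/O
        }
    }
}
```

With length 0 decoded for both table name and row key, the comparator sees two empty arrays and reports equality, which is what funnels every mapper's output into a single reduce call.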





[jira] [Commented] (PHOENIX-2221) Option to make data regions not writable when index regions are not available

2016-01-25 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15116694#comment-15116694
 ] 

Devaraj Das commented on PHOENIX-2221:
--

bq. Disable index table would not simulate index failure is a different issue. 
We can open a different Jira to track that.
[~aliciashu] I think we must find out the cause. Without that we can't commit 
the patch...

> Option to make data regions not writable when index regions are not available
> -
>
> Key: PHOENIX-2221
> URL: https://issues.apache.org/jira/browse/PHOENIX-2221
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Devaraj Das
>Assignee: Alicia Ying Shu
> Fix For: 4.8.0
>
> Attachments: DelegateIndexFailurePolicy.java, PHOENIX-2221-v1.patch, 
> PHOENIX-2221-v2.patch, PHOENIX-2221-v3.patch, PHOENIX-2221-v4.patch, 
> PHOENIX-2221-v5.patch, PHOENIX-2221-v6.patch, PHOENIX-2221.patch, 
> PHOENIX-2221.wip, PHOENIX-2221_v7.patch
>
>
> In one usecase, it was deemed better to not accept writes when the index 
> regions are unavailable for any reason (as opposed to disabling the index and 
> the queries doing bigger data-table scans).
> The idea is that the index regions are kept consistent with the data regions, 
> and when a query runs against the index regions, one can be reasonably sure 
> that the query ran with the most recent data in the data regions. When the 
> index regions are unavailable, the writes to the data table are rejected. 
> Read queries off of the index regions would have deterministic performance 
> (and on the other hand if the index is disabled, then the read queries would 
> have to go to the data regions until the indexes are rebuilt, and the queries 
> would suffer).
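A minimal sketch of the two behaviors contrasted above (hypothetical names and return values; Phoenix's actual hook is an index failure policy implementation, cf. the attached DelegateIndexFailurePolicy.java): with the option this issue adds, an index write failure is surfaced to the client so index and data stay consistent, instead of disabling the index for rebuild.

```java
// Hypothetical sketch contrasting the default behavior with the option
// proposed in this issue. The string results are stand-ins for the real
// actions (throwing to the client vs. marking the index disabled).
public class IndexFailureHandling {
    public enum Policy { DISABLE_INDEX, REJECT_WRITES }

    public static String onIndexWriteFailure(Policy p) {
        switch (p) {
            case REJECT_WRITES:
                // Fail the data-table mutation; the client must retry, and
                // index reads keep deterministic performance.
                return "THROW_TO_CLIENT";
            case DISABLE_INDEX:
            default:
                // Accept the write and mark the index disabled; queries fall
                // back to data-table scans until the index is rebuilt.
                return "MARK_INDEX_DISABLED";
        }
    }
}
```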





[jira] [Commented] (PHOENIX-1734) Local index improvements

2015-11-24 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15025734#comment-15025734
 ] 

Devaraj Das commented on PHOENIX-1734:
--

Speaking on behalf of [~rajeshbabu] (since I talked to him this morning about 
this, and he is probably asleep), I believe he has accommodated your asks, 
[~jamestaylor]. He hasn't accommodated [~enis]'s asks, since not all mutations 
can be handled in the way [~enis] proposed. We thought we could take a look at 
that atomicity aspect later.

> Local index improvements
> 
>
> Key: PHOENIX-1734
> URL: https://issues.apache.org/jira/browse/PHOENIX-1734
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
> Attachments: PHOENI-1734-WIP.patch, PHOENIX-1734_v1.patch, 
> PHOENIX-1734_v4.patch, TestAtomicLocalIndex.java
>
>
> Local index design considerations: 
>  1. Colocation: We need to co-locate the local index regions and the data 
> regions. The co-location can be a hard guarantee or a soft (best-effort) 
> guarantee. The co-location is a performance requirement, and it may also be 
> needed for consistency (2). Hard co-location means that either both the data 
> region and the index region are opened atomically, or neither of them opens 
> for serving. 
>  2. Index consistency : Ideally we want the index region and data region to 
> have atomic updates. This means that they should either (a)use transactions, 
> or they should (b)share the same WALEdit and also MVCC for visibility. (b) is 
> only applicable if there is hard colocation guarantee. 
>  3. Local index clients : How the local index will be accessed from clients. 
> In case of the local index being managed in a table, the HBase client can be 
> used for doing scans, etc. If the local index is hidden inside the data 
> regions, there has to be a different mechanism to access the data through the 
> data region. 
> With the above considerations, we imagine three possible implementation for 
> the local index solution, each detailed below. 
> APPROACH 1:  Current approach
> (1) Current approach uses balancer as a soft guarantee. Because of this, in 
> some rare cases, colocation might not happen. 
> (2) The index and data regions do not share the same WALEdits. Meaning 
> consistency cannot be achieved. Also there are two WAL writes per write from 
> client. 
> (3) Regular Hbase client can be used to access index data since index is just 
> another table. 
> APPROACH 2: Shadow regions + shared WAL & MVCC 
> (1) Introduce a shadow regions concept in HBase. Shadow regions are not 
> assigned by AM. Phoenix implements atomic open (and split/merge) of region 
> opening for data regions and index regions so that hard co-location is 
> guaranteed. 
> (2) For consistency requirements, the index regions and data regions will 
> share the same WALEdit (and thus recovery) and they will also share the same 
> MVCC mechanics so that index update and data update is visible atomically. 
> (3) Regular Hbase client can be used to access index data since index is just 
> another table.  
> APPROACH 3: Storing index data in separate column families in the table.
>  (1) Regions will have store files for cfs, which is sorted using the primary 
> sort order. Regions may also maintain stores, sorted in secondary sort 
> orders. This approach is similar in vein how a RDBMS keeps data (a B-TREE in 
> primary sort order and multiple B-TREEs in secondary sort orders with 
> pointers to primary key). That means store the index data in separate column 
> families in the data region. This way a region is extended to be more similar 
> to a RDBMS (but LSM instead of BTree). This is sometimes called shadow cf’s 
> as well. This approach guarantees hard co-location.
>  (2) Since everything is in a single region, they automatically share the 
> same WALEdit and MVCC numbers. Atomicity is easily achieved. 
>  (3) Current Phoenix implementation need to change in such a way that column 
> families selection in read/write path is based data table/index table(logical 
> table in phoenix). 
> I think that APPROACH 3 is the best one for the long term, since it does not 
> require changing anything in HBase; mainly, we don't need to muck around with 
> the split/merge stuff in HBase. It will be a win-win.
> However, APPROACH 2 still needs a “shadow regions” concept to be implemented 
> in HBase itself, and also a way to share WALEdits and MVCCs from multiple 
> regions.
> APPROACH 1 is a good start for local indexes, but I think we are not getting 
> the full benefits of the feature. We can support this for the short term, 
> and decide on the next steps for a longer-term implementation. 
> We won't be able to get to implementing it immediately, and want to start a 
> brainstorm.
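The shadow-column-family design (APPROACH 3) can be modeled with a toy in-memory region, where index rows live in a second store inside the same region so that a data write and its index write happen in one atomic step. All class and method names here are illustrative, not Phoenix's actual implementation:

```python
# Toy model of APPROACH 3: index entries live in a shadow column family
# ("L#0") inside the same region as the data family ("0"), so one atomic
# mutation covers both. In a real region they would share the same
# WALEdit and MVCC number; here a single method call stands in for that.
import bisect

class Region:
    def __init__(self):
        # One store per column family: the data store and the shadow store.
        self.stores = {"0": {}, "L#0": {}}

    def put_atomic(self, row_key, value, indexed_col):
        # Data write and index write applied together (atomicity for free,
        # since everything is in a single region).
        self.stores["0"][row_key] = {indexed_col: value}
        # Index row key puts the indexed value first, then the data row
        # key, giving the secondary sort order.
        self.stores["L#0"][(value, row_key)] = row_key

    def scan_index(self, value):
        # Range scan over the shadow CF in secondary sort order.
        keys = sorted(self.stores["L#0"])
        i = bisect.bisect_left(keys, (value,))
        out = []
        while i < len(keys) and keys[i][0] == value:
            out.append(self.stores["L#0"][keys[i]])
            i += 1
        return out

region = Region()
region.put_atomic("row1", "a", "c1")
region.put_atomic("row2", "a", "c1")
region.put_atomic("row3", "b", "c1")
print(region.scan_index("a"))  # -> ['row1', 'row2']
```

Because both stores sit in one region, hard co-location and shared recovery come for free; the cost is that region-level machinery (splits, compactions) must treat the two families together.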




[jira] [Created] (PHOENIX-2221) Option to make data regions not writable when index regions are not available

2015-08-31 Thread Devaraj Das (JIRA)
Devaraj Das created PHOENIX-2221:


 Summary: Option to make data regions not writable when index 
regions are not available
 Key: PHOENIX-2221
 URL: https://issues.apache.org/jira/browse/PHOENIX-2221
 Project: Phoenix
  Issue Type: Improvement
Reporter: Devaraj Das


In one usecase, it was deemed better to not accept writes when the index 
regions are unavailable for any reason (as opposed to disabling the index and 
the queries doing bigger data-table scans).
The idea is that the index regions are kept consistent with the data regions, 
and when a query runs against the index regions, one can be reasonably sure 
that the query ran with the most recent data in the data regions. When the 
index regions are unavailable, the writes to the data table are rejected. Read 
queries off of the index regions would have deterministic performance (and on 
the other hand if the index is disabled, then the read queries would have to go 
to the data regions until the indexes are rebuilt, and the queries would 
suffer).
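The proposed option can be sketched roughly as follows; the function and flag names are hypothetical illustrations, not Phoenix configuration:

```python
# Sketch of the proposed behavior (names are hypothetical, not Phoenix
# API): with "strict" mode on, a data-table write is rejected whenever
# the index regions are unavailable, instead of disabling the index and
# letting reads fall back to bigger data-table scans.
class IndexUnavailableError(Exception):
    pass

def write_row(data_table, index_available, row, *, strict=True):
    if not index_available:
        if strict:
            # Reject the write: the index stays consistent with the data,
            # and index reads keep deterministic performance.
            raise IndexUnavailableError("index regions unavailable; write rejected")
        # Non-strict (current behavior): accept the write and mark the
        # index disabled, forcing reads onto the data regions until the
        # index is rebuilt.
        data_table["__index_disabled__"] = True
    data_table[row["key"]] = row["value"]

table = {}
write_row(table, True, {"key": "k1", "value": "v1"})
try:
    write_row(table, False, {"key": "k2", "value": "v2"})
except IndexUnavailableError:
    pass
print("k2" in table)  # -> False
```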





[jira] [Commented] (PHOENIX-1982) Documentation for UDF support

2015-05-20 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14553392#comment-14553392
 ] 

Devaraj Das commented on PHOENIX-1982:
--

bq. For example, maybe something like The jar containing the UDFs must be 
manually added to HDFS.
And, also, a similar note for the cleanup of UDF jars & the like.

 Documentation for UDF support
 -

 Key: PHOENIX-1982
 URL: https://issues.apache.org/jira/browse/PHOENIX-1982
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: 4.4.0

 Attachments: PHOENIX-1982.patch, PHOENIX-1982_v2.patch, 
 create_function.csv








[jira] [Commented] (PHOENIX-331) Hive Storage

2015-05-10 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14537178#comment-14537178
 ] 

Devaraj Das commented on PHOENIX-331:
-

[~nmaillard] it'd be good to attach the patch here so that the precommit build 
can run on the patch and catch issues if any.

 Hive Storage
 

 Key: PHOENIX-331
 URL: https://issues.apache.org/jira/browse/PHOENIX-331
 Project: Phoenix
  Issue Type: Task
Reporter: nicolas maillard
Assignee: nicolas maillard
  Labels: enhancement
 Attachments: PHOENIX-331.patch


 I see a pig storage has been added it would be a great feature for a hive one 
 as well.





[jira] [Commented] (PHOENIX-1939) Test are failing with DoNotRetryIOException: ATABLE: null

2015-04-29 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14520086#comment-14520086
 ] 

Devaraj Das commented on PHOENIX-1939:
--

+1
(although I am not sure why the NPE wouldn't happen in the unit test runs, it 
still seems like a good thing to do).

 Test are failing with DoNotRetryIOException: ATABLE: null
 -

 Key: PHOENIX-1939
 URL: https://issues.apache.org/jira/browse/PHOENIX-1939
 Project: Phoenix
  Issue Type: Bug
Reporter: Alicia Ying Shu
Assignee: Alicia Ying Shu
 Attachments: Phoenix-1939.patch


 Some phoenix tests are failing with the message
 {noformat}
 1) testIsNull[CREATE INDEX ATABLE_IDX ON aTable (a_integer DESC) INCLUDE (
 A_STRING, B_STRING, A_DATE)](org.apache.phoenix.end2end.QueryIT)
 org.apache.phoenix.exception.PhoenixIOException: 
 org.apache.hadoop.hbase.DoNotRetryIOException: ATABLE: null
   at 
 org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:83)
   at 
 org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:812)
   at 
 org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:7771)
   at 
 org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6830)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3415)
   at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3397)
   at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29998)
   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2078)
   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
   at 
 org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
   at java.lang.Thread.run(Thread.java:745)
 Caused by: java.lang.NullPointerException
   at org.apache.phoenix.schema.PTableImpl.toProto(PTableImpl.java:934)
   at 
 org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:777)
   ... 10 more
   at 
 org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:107)
   at 
 org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:938)
   at 
 org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1139)
   at 
 org.apache.phoenix.query.DelegateConnectionQueryServices.createTable(DelegateConnectionQueryServices.java:110)
   at 
 org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:1527)
   at 
 org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:535)
   at 
 org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:184)
   at 
 org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:260)
   at 
 org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:252)
   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
   at 
 org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:250)
   at 
 org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1037)
   at org.apache.phoenix.query.BaseTest.createTestTable(BaseTest.java:737)
   at org.apache.phoenix.query.BaseTest.createTestTable(BaseTest.java:711)
   at 
 org.apache.phoenix.query.BaseTest.ensureTableCreated(BaseTest.java:703)
   at org.apache.phoenix.query.BaseTest.initATableValues(BaseTest.java:894)
   at 
 org.apache.phoenix.query.BaseTest.initATableValues(BaseTest.java:1097)
   at org.apache.phoenix.end2end.BaseQueryIT.initTable(BaseQueryIT.java:94)
   at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
   at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
   at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
   at 
 org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
   at 
 

[jira] [Commented] (PHOENIX-1727) Pherf - Port shell scripts to python

2015-04-27 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14514616#comment-14514616
 ] 

Devaraj Das commented on PHOENIX-1727:
--

Thanks for the mention, [~jamestaylor]. Yes, Windows is important for us, but we 
may not be able to get to this immediately. [~tdhavle], FYI.

 Pherf - Port shell scripts to python
 

 Key: PHOENIX-1727
 URL: https://issues.apache.org/jira/browse/PHOENIX-1727
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Cody Marcel
Assignee: Cody Marcel
Priority: Minor
  Labels: newbie

 Move the pherf.sh scripts into python scripts.





[jira] [Commented] (PHOENIX-1915) QueryServerBasicsIT should bind QS instance to random port

2015-04-24 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14511707#comment-14511707
 ] 

Devaraj Das commented on PHOENIX-1915:
--

LGTM

 QueryServerBasicsIT should bind QS instance to random port
 --

 Key: PHOENIX-1915
 URL: https://issues.apache.org/jira/browse/PHOENIX-1915
 Project: Phoenix
  Issue Type: Test
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
Priority: Minor
 Fix For: 5.0.0, 4.4.0

 Attachments: PHOENIX-1915.00.patch


 Make our tests more reliable by choosing a random available port for PQS 
 rather than always binding to the default.





[jira] [Commented] (PHOENIX-1904) bin scripts use incorrect variable for locating hbase conf

2015-04-22 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14508242#comment-14508242
 ] 

Devaraj Das commented on PHOENIX-1904:
--

Looks good to me as long as it is tested.

 bin scripts use incorrect variable for locating hbase conf
 --

 Key: PHOENIX-1904
 URL: https://issues.apache.org/jira/browse/PHOENIX-1904
 Project: Phoenix
  Issue Type: Bug
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 5.0.0, 4.4.0

 Attachments: PHOENIX-1904.00.patch, PHOENIX-1904.00.patch, 
 PHOENIX-1904.00.patch, PHOENIX-1904.00.patch


 In bin/phoenix_utils.py, we're building a classpath based on an incorrect 
 environment variable. 'HBASE_CONF_PATH' is used:
 https://github.com/apache/phoenix/blob/master/bin/phoenix_utils.py#L68
 while Bigtop is registering 'HBASE_CONF_DIR' for us:
 https://github.com/apache/bigtop/blob/master/bigtop-packages/src/common/hbase/hbase.default#L17
 There's even a local work-around for this problem in end2endTest.py:
 https://github.com/apache/phoenix/blob/master/bin/end2endTest.py#L37
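A minimal sketch of such a fallback, assuming the script simply consults both variable spellings before using a default (this is an illustration of the idea, not necessarily the committed patch):

```python
# Resolve the HBase conf directory by checking both spellings of the
# variable, preferring HBASE_CONF_DIR (what Bigtop exports) over the
# legacy HBASE_CONF_PATH that the Phoenix scripts were reading.
import os

def find_hbase_conf_dir(default="/etc/hbase/conf"):
    return (os.environ.get("HBASE_CONF_DIR")
            or os.environ.get("HBASE_CONF_PATH")
            or default)

# Demonstrate the fallback: only the legacy variable is set.
os.environ.pop("HBASE_CONF_DIR", None)
os.environ["HBASE_CONF_PATH"] = "/opt/hbase/conf"
print(find_hbase_conf_dir())  # -> /opt/hbase/conf
```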





[jira] [Updated] (PHOENIX-971) Query server

2015-04-08 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated PHOENIX-971:

Attachment: 971-secure-login.patch

[~ndimiduk], this should address the issue of the query server connecting to 
secure HBase clusters. Note that this doesn't address the issue of 
authenticated access to the query server itself. The patch is on top of your 
patch from yesterday - 
https://issues.apache.org/jira/secure/attachment/12723770/PHOENIX-971.01.patch. 

 Query server
 

 Key: PHOENIX-971
 URL: https://issues.apache.org/jira/browse/PHOENIX-971
 Project: Phoenix
  Issue Type: New Feature
Reporter: Andrew Purtell
Assignee: Nick Dimiduk
 Fix For: 4.4.0

 Attachments: 971-secure-login.patch, PHOENIX-971.00.patch, 
 PHOENIX-971.01.patch, PHOENIX-971.01.patch, PHOENIX-971.01.patch, image-2.png


 Host the JDBC driver in a query server process that can be deployed as a 
 middle tier between lighter weight clients and Phoenix+HBase. This would 
 serve a similar optional role in Phoenix deployments as the 
 [HiveServer2|https://cwiki.apache.org/confluence/display/Hive/Setting+Up+HiveServer2]
  does in Hive deploys.





[jira] [Updated] (PHOENIX-971) Query server

2015-04-08 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated PHOENIX-971:

Attachment: 971-secure-login-1.patch

This checks whether security is enabled or not (forgot to put that check 
earlier).

 Query server
 

 Key: PHOENIX-971
 URL: https://issues.apache.org/jira/browse/PHOENIX-971
 Project: Phoenix
  Issue Type: New Feature
Reporter: Andrew Purtell
Assignee: Nick Dimiduk
 Fix For: 4.4.0

 Attachments: 971-secure-login-1.patch, 971-secure-login.patch, 
 PHOENIX-971.00.patch, PHOENIX-971.01.patch, PHOENIX-971.01.patch, 
 PHOENIX-971.01.patch, image-2.png


 Host the JDBC driver in a query server process that can be deployed as a 
 middle tier between lighter weight clients and Phoenix+HBase. This would 
 serve a similar optional role in Phoenix deployments as the 
 [HiveServer2|https://cwiki.apache.org/confluence/display/Hive/Setting+Up+HiveServer2]
  does in Hive deploys.





[jira] [Commented] (PHOENIX-1681) Use the new Region Interface

2015-04-06 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14481977#comment-14481977
 ] 

Devaraj Das commented on PHOENIX-1681:
--

[~apurtell], I simply tried to apply the last patch. It applied with a few 
conflicts. We'd like to get this in soon (ideally in the next day or two), and 
if you are working on it, very well. If not, please let me know. Thanks!

 Use the new Region Interface
 

 Key: PHOENIX-1681
 URL: https://issues.apache.org/jira/browse/PHOENIX-1681
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Attachments: PHOENIX-1681-4.patch, PHOENIX-1681-4.patch


 HBase is introducing a new interface, Region, a supportable public/evolving 
 subset of HRegion. Use this instead of HRegion in all places where we are 
 using HRegion today





[jira] [Commented] (PHOENIX-1681) Use the new Region Interface

2015-04-06 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14482445#comment-14482445
 ] 

Devaraj Das commented on PHOENIX-1681:
--

Thanks [~apurtell] for the update. I believe [~enis] has a patch for the 
NextState part. [~enis], could you confirm, please?

 Use the new Region Interface
 

 Key: PHOENIX-1681
 URL: https://issues.apache.org/jira/browse/PHOENIX-1681
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Attachments: 0001-PHOENIX-1681-Use-the-new-Region-Interface.patch, 
 PHOENIX-1681-4.patch, PHOENIX-1681-4.patch


 HBase is introducing a new interface, Region, a supportable public/evolving 
 subset of HRegion. Use this instead of HRegion in all places where we are 
 using HRegion today





[jira] [Resolved] (PHOENIX-1810) Support UNION ALL in subquery

2015-04-04 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das resolved PHOENIX-1810.
--
Resolution: Duplicate

 Support UNION ALL in subquery
 -

 Key: PHOENIX-1810
 URL: https://issues.apache.org/jira/browse/PHOENIX-1810
 Project: Phoenix
  Issue Type: Improvement
Reporter: Alicia Ying Shu
Assignee: Alicia Ying Shu







[jira] [Commented] (PHOENIX-1683) Support HBase HA Query(timeline-consistent region replica read)

2015-04-02 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14392223#comment-14392223
 ] 

Devaraj Das commented on PHOENIX-1683:
--

Looks fine to me from the Scan API usage point of view, but someone from 
Phoenix should take a look.

 Support HBase HA Query(timeline-consistent region replica read)
 ---

 Key: PHOENIX-1683
 URL: https://issues.apache.org/jira/browse/PHOENIX-1683
 Project: Phoenix
  Issue Type: New Feature
Reporter: Jeffrey Zhong
Assignee: Rajeshbabu Chintaguntla
 Attachments: PHOENIX-1683.patch, PHOENIX-1683_v2.patch


 As HBASE-10070 is in HBase1.0, we could leverage this feature by providing a 
 new consistency level TIMELINE.
 Assumption: A user has already enabled a hbase table for timeline consistency
 In the connection property or by ALTER SESSION SET CONSISTENCY = 'TIMELINE' 
 statement, we could set current connection/session consistency level to 
 TIMELINE to take the advantage of TIMELINE read. 
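The session-level switch described above can be modeled with a toy session object; the parsing and class names are illustrative only, not Phoenix's actual grammar or driver:

```python
# Toy model of a session-level consistency setting: executing
# "ALTER SESSION SET CONSISTENCY = 'TIMELINE'" flips every subsequent
# scan from the default STRONG to TIMELINE, mirroring HBASE-10070's
# timeline-consistent region-replica reads.
class Session:
    def __init__(self, consistency="STRONG"):
        self.consistency = consistency

    def execute(self, sql):
        stmt = sql.strip().upper()
        prefix = "ALTER SESSION SET CONSISTENCY ="
        if stmt.startswith(prefix):
            # Update the session state; no scan is issued.
            self.consistency = stmt.split("=", 1)[1].strip().strip("'")
            return None
        # Every scan carries the session's current consistency level.
        return {"sql": sql, "consistency": self.consistency}

s = Session()
s.execute("ALTER SESSION SET CONSISTENCY = 'TIMELINE'")
print(s.execute("SELECT * FROM T")["consistency"])  # -> TIMELINE
```

A connection property could seed the initial value the same way, with the ALTER SESSION statement overriding it mid-session.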





[jira] [Commented] (PHOENIX-1580) Support UNION ALL

2015-04-02 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14392960#comment-14392960
 ] 

Devaraj Das commented on PHOENIX-1580:
--

Did a quick browse of the last patch. Is the change in the test 
QueryParserTest.java inadvertent?

 Support UNION ALL
 -

 Key: PHOENIX-1580
 URL: https://issues.apache.org/jira/browse/PHOENIX-1580
 Project: Phoenix
  Issue Type: Bug
Reporter: Alicia Ying Shu
Assignee: Alicia Ying Shu
 Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
 Phoenix-1580-v2.patch, Phoenix-1580-v3.patch, Phoenix-1580-v4.patch, 
 Phoenix-1580-v5.patch, Phoenix-1580-v6.patch, phoenix-1580-v1-wipe.patch, 
 phoenix-1580.patch, unionall-wipe.patch


 Select * from T1
 UNION ALL
 Select * from T2





[jira] [Commented] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14391008#comment-14391008
 ] 

Devaraj Das commented on PHOENIX-1580:
--

[~aliciashu], the question is what's the right way of designing/implementing it 
for the long term, one that also addresses maintainability, scalability, and 
extensibility (if we add more sophisticated queries in the future, can the 
current implementation handle them without a lot of rework?). From the 
commentary so far on this ticket, I think we need to step back and take a look 
at where the current patch falls short of the feedback.

 Support UNION ALL
 -

 Key: PHOENIX-1580
 URL: https://issues.apache.org/jira/browse/PHOENIX-1580
 Project: Phoenix
  Issue Type: Bug
Reporter: Alicia Ying Shu
Assignee: Alicia Ying Shu
 Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
 Phoenix-1580-v2.patch, phoenix-1580-v1-wipe.patch, phoenix-1580.patch, 
 unionall-wipe.patch


 Select * from T1
 UNION ALL
 Select * from T2





[jira] [Commented] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14391684#comment-14391684
 ] 

Devaraj Das commented on PHOENIX-1580:
--

[~aliciashu], please investigate what could be causing the test failures. Both 
[~maryannxue] & [~jamestaylor] are very experienced in the Phoenix codebase, 
so their feedback needs to be seriously considered.

 Support UNION ALL
 -

 Key: PHOENIX-1580
 URL: https://issues.apache.org/jira/browse/PHOENIX-1580
 Project: Phoenix
  Issue Type: Bug
Reporter: Alicia Ying Shu
Assignee: Alicia Ying Shu
 Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
 Phoenix-1580-v2.patch, Phoenix-1580-v3.patch, phoenix-1580-v1-wipe.patch, 
 phoenix-1580.patch, unionall-wipe.patch


 Select * from T1
 UNION ALL
 Select * from T2





[jira] [Commented] (PHOENIX-1763) Support building with HBase-1.1.0

2015-03-21 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14372850#comment-14372850
 ] 

Devaraj Das commented on PHOENIX-1763:
--

It seems many classes are missing the import for the NextState class... Fails 
to compile.

 Support building with HBase-1.1.0 
 --

 Key: PHOENIX-1763
 URL: https://issues.apache.org/jira/browse/PHOENIX-1763
 Project: Phoenix
  Issue Type: Improvement
Reporter: Enis Soztutar
 Fix For: 5.0.0

 Attachments: phoenix-1763_v1.patch


 HBase-1.1 is in the works. However, due to HBASE-11544 and possibly 
 HBASE-12972 and more, we need some changes for supporting HBase-1.1 even 
 after PHOENIX-1642. 
 We can decide on a plan to support (or not) HBase-1.1 on which branches by 
 the time it comes out. Let's use subtasks to keep progress for build support 
 for 1.1.0-SNAPSHOT. 





[jira] [Commented] (PHOENIX-1642) Make Phoenix Master Branch pointing to HBase1.0.0

2015-03-14 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14362091#comment-14362091
 ] 

Devaraj Das commented on PHOENIX-1642:
--

[~jamestaylor], I did a cursory check of the diffs w.r.t. what I submitted on 
[~jeffreyz]'s rebased patch. I see you removed BaseTest.java in the patch you 
last submitted. But in Jeffrey's original patch, there was a 
{noformat}
+conn.commit();
{noformat}
which is missing from the BaseTest.java committed to the master branch thus 
far. Just wondering if that is deliberate...

 Make Phoenix Master Branch pointing to HBase1.0.0
 -

 Key: PHOENIX-1642
 URL: https://issues.apache.org/jira/browse/PHOENIX-1642
 Project: Phoenix
  Issue Type: Bug
Reporter: Jeffrey Zhong
Assignee: Devaraj Das
 Attachments: 1642-1.txt, 1642-2.txt, 1642-toRemove.patch, 
 1642-toRemove2.txt, PHOENIX-1642.patch


 As HBase1.0.0 will soon be released, the JIRA is to point Phoenix master 
 branch to HBase1.0.0 release. Once we reach consensus,  we could also port 
 the changes into Phoenix 4.0 branch as well which can be done in a separate 
 JIRA.





[jira] [Commented] (PHOENIX-1725) Extract changes applicable to 4.0 branch

2015-03-13 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14360699#comment-14360699
 ] 

Devaraj Das commented on PHOENIX-1725:
--

I committed the patch on the master & 4.0 branches. I am not able to resolve 
the issue. [~jamestaylor], can you please add me to the Contributors list on 
the Phoenix JIRA? Thanks!

 Extract changes applicable to 4.0 branch
 

 Key: PHOENIX-1725
 URL: https://issues.apache.org/jira/browse/PHOENIX-1725
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Jeffrey Zhong
Assignee: Jeffrey Zhong
 Fix For: 5.0.0, 4.4

 Attachments: PHOENIX-1725.patch, phoenix-1725-combined.txt, 
 phoenix-1725-time-related-changes.txt


 This JIRA is to move some changes attached in phoenix-1642 patch into 4.0 
 branch.





[jira] [Resolved] (PHOENIX-1725) Extract changes applicable to 4.0 branch

2015-03-13 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das resolved PHOENIX-1725.
--
Resolution: Fixed

 Extract changes applicable to 4.0 branch
 

 Key: PHOENIX-1725
 URL: https://issues.apache.org/jira/browse/PHOENIX-1725
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Jeffrey Zhong
Assignee: Jeffrey Zhong
 Fix For: 5.0.0, 4.4

 Attachments: PHOENIX-1725.patch, phoenix-1725-combined.txt, 
 phoenix-1725-time-related-changes.txt


 This JIRA is to move some changes attached in phoenix-1642 patch into 4.0 
 branch.





[jira] [Updated] (PHOENIX-1642) Make Phoenix Master Branch pointing to HBase1.0.0

2015-03-13 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated PHOENIX-1642:
-
Attachment: 1642-1.txt

This is [~jeffreyz]'s rebased patch.

 Make Phoenix Master Branch pointing to HBase1.0.0
 -

 Key: PHOENIX-1642
 URL: https://issues.apache.org/jira/browse/PHOENIX-1642
 Project: Phoenix
  Issue Type: Bug
Reporter: Jeffrey Zhong
Assignee: Jeffrey Zhong
 Attachments: 1642-1.txt, PHOENIX-1642.patch


 As HBase1.0.0 will soon be released, the JIRA is to point Phoenix master 
 branch to HBase1.0.0 release. Once we reach consensus,  we could also port 
 the changes into Phoenix 4.0 branch as well which can be done in a separate 
 JIRA.





[jira] [Updated] (PHOENIX-1725) Extract changes applicable to 4.0 branch

2015-03-12 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated PHOENIX-1725:
-
Attachment: phoenix-1725-time-related-changes.txt

This patch illustrates the time-related changes. Basically, I copied over the 
EnvironmentEdge* classes from HBase (leaving out the test classes). This helps 
us keep the existing semantics with very little change in the code (mostly 
'import' changes). If people agree to this, we need to change [~jeffreyz]'s 
patch to not do the time-related changes.
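The EnvironmentEdge pattern being copied over can be sketched like this (a loose Python rendering of HBase's classes, for illustration only): production code asks a manager for the current time, and tests inject a fixed edge instead of patching call sites.

```python
# Sketch of the EnvironmentEdge pattern: call sites read the clock
# through EnvironmentEdgeManager, so tests can substitute a fixed edge.
# Names follow HBase's classes loosely and are illustrative.
import time

class DefaultEnvironmentEdge:
    def current_time_millis(self):
        return int(time.time() * 1000)

class EnvironmentEdgeManager:
    _edge = DefaultEnvironmentEdge()

    @classmethod
    def inject_edge(cls, edge):
        # Tests swap in a deterministic edge here.
        cls._edge = edge

    @classmethod
    def current_time_millis(cls):
        return cls._edge.current_time_millis()

class FixedEdge:
    def __init__(self, millis):
        self.millis = millis

    def current_time_millis(self):
        return self.millis

EnvironmentEdgeManager.inject_edge(FixedEdge(42))
print(EnvironmentEdgeManager.current_time_millis())  # -> 42
```

Keeping these classes inside Phoenix means the existing call sites only need their import lines changed, which is exactly the "very little change" claimed above.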

 Extract changes applicable to 4.0 branch
 

 Key: PHOENIX-1725
 URL: https://issues.apache.org/jira/browse/PHOENIX-1725
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Jeffrey Zhong
Assignee: Jeffrey Zhong
 Fix For: 5.0.0, 4.4

 Attachments: PHOENIX-1725.patch, phoenix-1725-time-related-changes.txt


 This JIRA is to move some changes attached in phoenix-1642 patch into 4.0 
 branch.





[jira] [Commented] (PHOENIX-1725) Extract changes applicable to 4.0 branch

2015-03-12 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14359621#comment-14359621
 ] 

Devaraj Das commented on PHOENIX-1725:
--

bq. One small request: would you mind adding a TimeUtil class with a static 
currentTimeMillis() 
[~jamestaylor], I see a TimeKeeper in the master branch (not checked into 4.0). 
We could use that, right?

 Extract changes applicable to 4.0 branch
 

 Key: PHOENIX-1725
 URL: https://issues.apache.org/jira/browse/PHOENIX-1725
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Jeffrey Zhong
Assignee: Jeffrey Zhong
 Fix For: 5.0.0, 4.4

 Attachments: PHOENIX-1725.patch


 This JIRA is to move some changes attached in phoenix-1642 patch into 4.0 
 branch.





[jira] [Updated] (PHOENIX-1725) Extract changes applicable to 4.0 branch

2015-03-12 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated PHOENIX-1725:
-
Attachment: phoenix-1725-combined.txt

Thanks [~jamestaylor] for looking. Here is the combined patch (mine and 
[~jeffreyz]'s).

 Extract changes applicable to 4.0 branch
 

 Key: PHOENIX-1725
 URL: https://issues.apache.org/jira/browse/PHOENIX-1725
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Jeffrey Zhong
Assignee: Jeffrey Zhong
 Fix For: 5.0.0, 4.4

 Attachments: PHOENIX-1725.patch, phoenix-1725-combined.txt, 
 phoenix-1725-time-related-changes.txt


 This JIRA is to move some changes attached in phoenix-1642 patch into 4.0 
 branch.


