[jira] [Commented] (PHOENIX-3159) CachingHTableFactory may close HTable during eviction even if it is getting used for writing by another thread.

2016-08-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15411429#comment-15411429
 ] 

Hadoop QA commented on PHOENIX-3159:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12822512/PHOENIX-3159.patch
  against master branch at commit ba82b1cb5a14c2cf109deb8a862389142d92f541.
  ATTACHMENT ID: 12822512

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
34 warning messages.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+synchronized (workingTables) { //the whole operation of closing and removing the entry should be atomic
+  private List workingTables = Collections.synchronizedList(new ArrayList());
+//set. The eviction will not really close the underlying table until all the instances are cleared

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/499//testReport/
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/499//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/499//console

This message is automatically generated.

> CachingHTableFactory may close HTable during eviction even if it is getting 
> used for writing by another thread.
> ---
>
> Key: PHOENIX-3159
> URL: https://issues.apache.org/jira/browse/PHOENIX-3159
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.8.1
>
> Attachments: PHOENIX-3159.patch
>
>
> CachingHTableFactory may close an HTable during eviction even if it is being 
> used for writing by another thread, which causes the writing thread to fail 
> and the index to be disabled.
> LRU eviction closes the HTable or the underlying connection when the cache is 
> full and a new HTable is requested.
> {code}
> 2016-08-04 13:45:21,109 DEBUG 
> [nat-s11-4-ioss-phoenix-1-5.openstacklocal,16020,1470297472814-index-writer--pool11-t35]
>  client.ConnectionManager$HConnectionImplementation: Closing HConnection 
> (debugging purposes only)
> java.lang.Exception
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.internalClose(ConnectionManager.java:2423)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.close(ConnectionManager.java:2447)
> at 
> org.apache.hadoop.hbase.client.CoprocessorHConnection.close(CoprocessorHConnection.java:41)
> at 
> org.apache.hadoop.hbase.client.HTableWrapper.internalClose(HTableWrapper.java:91)
> at 
> org.apache.hadoop.hbase.client.HTableWrapper.close(HTableWrapper.java:107)
> at 
> org.apache.phoenix.hbase.index.table.CachingHTableFactory$HTableInterfaceLRUMap.removeLRU(CachingHTableFactory.java:61)
> at 
> org.apache.commons.collections.map.LRUMap.addMapping(LRUMap.java:256)
> at 
> org.apache.commons.collections.map.AbstractHashedMap.put(AbstractHashedMap.java:284)
> at 
> org.apache.phoenix.hbase.index.table.CachingHTableFactory.getTable(CachingHTableFactory.java:100)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:160)
> at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:136)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> But the IndexWriter was still using this old connection, which had been 
> closed during LRU eviction, to write to the table:
> {code}
> 2016-08-04 13:44:59,553 ERROR [htable-pool659-t1] client.AsyncProcess: Cannot 
> get replica 0 location for 
> {"totalColumns":1,"row":"\\xC7\\x03\\x04\\x06X\\x1C)\\x00\\x80\\x07\\xB0X","families":{"0":[{"qualifier":"_0","vlen":2,"tag":[],"timestamp":1470318296425}]}}
> java.io.IOException: hconnection-0x21f468be closed
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1153)
> at 
> org.apache.hadoop.hbase.client.CoprocessorHConnection.locateRegion(CoprocessorHConnection.java:41)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.findAllLocationsOrFail(AsyncProcess.java:949)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.groupAndSendMultiAction(AsyncProcess.java:866)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.resubmit(AsyncProcess.java:1195)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.receiveGlobalFailure(AsyncProcess.java:1162)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.access$1100(AsyncProcess.java:584)
> at 
> org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl$SingleServerRequestRunnable.run(AsyncProcess.java:727)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> Although the workaround is to increase the cache size 
> (index.tablefactory.cache.size), we should still handle the closing of 
> in-use HTables to avoid index write failures (which in turn disable the 
> index).
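A minimal sketch of the configuration workaround mentioned above, as an editorial illustration: the property name comes from the issue text, and the value 100 is an arbitrary example, not a recommendation.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CacheSizeWorkaround {
    public static void main(String[] args) {
        // Raise the HTable cache used by CachingHTableFactory so that LRU
        // eviction (and the premature close) happens less often.
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("index.tablefactory.cache.size", 100); // example value only
        System.out.println(conf.getInt("index.tablefactory.cache.size", -1));
    }
}
{code}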

[jira] [Commented] (PHOENIX-3128) Remove extraneous operations during upsert with local immutable index

2016-08-08 Thread Junegunn Choi (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15411613#comment-15411613
 ] 

Junegunn Choi commented on PHOENIX-3128:


Thanks, I can confirm that the deletes and duplicate writes are gone now. I'm 
still seeing the same number of scans for the non-transactional case, though. 
Is that expected?

> Remove extraneous operations during upsert with local immutable index
> -
>
> Key: PHOENIX-3128
> URL: https://issues.apache.org/jira/browse/PHOENIX-3128
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Junegunn Choi
>Assignee: Junegunn Choi
> Fix For: 4.8.0
>
> Attachments: PHOENIX-3128.patch, PHOENIX-3128_v2.patch, 
> PHOENIX-3128_v3.patch, PHOENIX-3128_v4.patch, PHOENIX-3128_v5.patch, 
> PHOENIX-3128_v6.patch, PHOENIX-3128_v7.patch, PHOENIX-3128_v8.patch, 
> PHOENIX-3128_wip.patch
>
>
> Upsert to a table with a local immutable index is supposed to be more 
> efficient than upsert to a table with a local mutable index, but it's 
> actually slower (in our environment by 30%) due to the extraneous operations 
> involved.
> The problem is twofold (see the sketch after this description):
> 1. The client unnecessarily prepares and sends index updates.
> 2. Index cleanup is done regardless of the immutability of the table.
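A minimal sketch of the second point, using hypothetical helper names: rows in an immutable table are never updated in place, so the cleanup step can be skipped entirely.

{code}
// Sketch only: immutable rows have no prior state, so there are no stale
// index rows to delete and no prior index state to read back.
public class IndexMaintenanceSketch {
    void maintainIndex(boolean immutableRows) {
        if (immutableRows) {
            writeNewIndexRows();   // the only work needed
            return;
        }
        deleteStaleIndexRows();    // mutable case: remove old index rows first
        writeNewIndexRows();
    }

    private void writeNewIndexRows() { /* hypothetical helper */ }
    private void deleteStaleIndexRows() { /* hypothetical helper */ }
}
{code}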





[jira] [Updated] (PHOENIX-3159) CachingHTableFactory may close HTable during eviction even if it is getting used for writing by another thread.

2016-08-08 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-3159:
---
Attachment: PHOENIX-3159_v1.patch

> CachingHTableFactory may close HTable during eviction even if it is getting 
> used for writing by another thread.
> ---
>
> Key: PHOENIX-3159
> URL: https://issues.apache.org/jira/browse/PHOENIX-3159
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.8.1
>
> Attachments: PHOENIX-3159.patch, PHOENIX-3159_v1.patch
>

[jira] [Commented] (PHOENIX-3159) CachingHTableFactory may close HTable during eviction even if it is getting used for writing by another thread.

2016-08-08 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15411672#comment-15411672
 ] 

Ankit Singhal commented on PHOENIX-3159:


Thanks [~devaraj] for the review and for pointing out the problem.
As you asked earlier, I have implemented it with a reference count, since I was 
not using the instances in the working-tables list anyway. Hopefully the 
attached patch now handles the consistency properly.
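A minimal sketch of the reference-counting idea, under the assumption that eviction defers the real close until no writer holds a reference; the class and method names below are illustrative, not the actual patch:

{code}
import java.io.IOException;

// Illustrative sketch only, not the PHOENIX-3159 patch itself.
class RefCountedTable {
    private int refCount;
    private boolean evicted;

    synchronized void retain() { refCount++; }  // a writer starts using the table

    // A writer is done: close only if eviction already happened and no one
    // else is using the table. Checking and closing must be one atomic step.
    synchronized void release() throws IOException {
        if (--refCount == 0 && evicted) {
            closeUnderlyingTable();
        }
    }

    // LRU eviction hook: never close while a writer is still active.
    synchronized void markEvicted() throws IOException {
        evicted = true;
        if (refCount == 0) {
            closeUnderlyingTable();
        }
    }

    private void closeUnderlyingTable() throws IOException {
        // delegate.close() on the wrapped table in a real implementation
    }
}
{code}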

> CachingHTableFactory may close HTable during eviction even if it is getting 
> used for writing by another thread.
> ---
>
> Key: PHOENIX-3159
> URL: https://issues.apache.org/jira/browse/PHOENIX-3159
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.8.1
>
> Attachments: PHOENIX-3159.patch, PHOENIX-3159_v1.patch
>

[jira] [Updated] (PHOENIX-3159) CachingHTableFactory may close HTable during eviction even if it is getting used for writing by another thread.

2016-08-08 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-3159:
---
Attachment: (was: PHOENIX-3159_v1.patch)

> CachingHTableFactory may close HTable during eviction even if it is getting 
> used for writing by another thread.
> ---
>
> Key: PHOENIX-3159
> URL: https://issues.apache.org/jira/browse/PHOENIX-3159
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.8.1
>
> Attachments: PHOENIX-3159.patch, PHOENIX-3159_v1.patch
>

[jira] [Updated] (PHOENIX-3159) CachingHTableFactory may close HTable during eviction even if it is getting used for writing by another thread.

2016-08-08 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-3159:
---
Attachment: PHOENIX-3159_v1.patch

> CachingHTableFactory may close HTable during eviction even if it is getting 
> used for writing by another thread.
> ---
>
> Key: PHOENIX-3159
> URL: https://issues.apache.org/jira/browse/PHOENIX-3159
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.8.1
>
> Attachments: PHOENIX-3159.patch, PHOENIX-3159_v1.patch
>

[jira] [Comment Edited] (PHOENIX-3159) CachingHTableFactory may close HTable during eviction even if it is getting used for writing by another thread.

2016-08-08 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15411672#comment-15411672
 ] 

Ankit Singhal edited comment on PHOENIX-3159 at 8/8/16 11:17 AM:
-

Thanks [~devaraj] for the review and for pointing out the problem.
As you asked earlier, I have now implemented it with a reference count, since I 
was not using the instances in the working-tables list anyway. Hopefully the 
attached patch now handles the consistency properly.


was (Author: an...@apache.org):
Thanks [~devaraj] for review and pointing out the problem.
As you asked earlier, I have implemented the same with reference count as I was 
not using instance in working tables any time. Hopefully, now the attached 
patch handles the consistency properly properly.

> CachingHTableFactory may close HTable during eviction even if it is getting 
> used for writing by another thread.
> ---
>
> Key: PHOENIX-3159
> URL: https://issues.apache.org/jira/browse/PHOENIX-3159
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.8.1
>
> Attachments: PHOENIX-3159.patch, PHOENIX-3159_v1.patch
>

[jira] [Commented] (PHOENIX-3159) CachingHTableFactory may close HTable during eviction even if it is getting used for writing by another thread.

2016-08-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15411739#comment-15411739
 ] 

Hadoop QA commented on PHOENIX-3159:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12822549/PHOENIX-3159_v1.patch
  against master branch at commit ba82b1cb5a14c2cf109deb8a862389142d92f541.
  ATTACHMENT ID: 12822549

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
34 warning messages.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+ synchronized (this) { // the whole operation of closing and checking the count should be atomic
+  private List workingTables = Collections.synchronizedList(new ArrayList());

{color:green}+1 core tests{color}.  The patch passed unit tests in .

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.ambari.server.controller.internal.FeedResourceProviderTest.testUpdateResources(FeedResourceProviderTest.java:131)

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/500//testReport/
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/500//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/500//console

This message is automatically generated.

> CachingHTableFactory may close HTable during eviction even if it is getting 
> used for writing by another thread.
> ---
>
> Key: PHOENIX-3159
> URL: https://issues.apache.org/jira/browse/PHOENIX-3159
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
> Fix For: 4.8.1
>
> Attachments: PHOENIX-3159.patch, PHOENIX-3159_v1.patch
>

[jira] [Resolved] (PHOENIX-1502) Tests for Indexer won't compile after HBASE-12522

2016-08-08 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey resolved PHOENIX-1502.
--
Resolution: Not A Problem
  Assignee: (was: Sean Busbey)

> Tests for Indexer won't compile after HBASE-12522
> -
>
> Key: PHOENIX-1502
> URL: https://issues.apache.org/jira/browse/PHOENIX-1502
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Sean Busbey
>
> Currently, HBase branch-1 includes HBASE-12522. That patch does a major 
> refactoring of WAL handling.
> At first blush, it doesn't impact any of Phoenix's runtime code, but it does 
> cause a compilation failure in the test code.





[jira] [Created] (PHOENIX-3162) TableNotFoundException might be thrown when an index dropped while upserting.

2016-08-08 Thread Rajeshbabu Chintaguntla (JIRA)
Rajeshbabu Chintaguntla created PHOENIX-3162:


 Summary: TableNotFoundException might be thrown when an index 
dropped while upserting.
 Key: PHOENIX-3162
 URL: https://issues.apache.org/jira/browse/PHOENIX-3162
 Project: Phoenix
  Issue Type: Bug
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: 4.8.1


If a table has a mix of global and local indexes and one of them is dropped 
while data is being upserted, there is a chance that the query will fail with 
TableNotFoundException. Usually, when an index is dropped, we skip writing to 
the dropped index on failure; a sketch of that behavior follows.
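A minimal sketch of that skip-on-failure behavior, with hypothetical stand-ins (IndexUpdate, and a local TableNotFoundException) for the real Phoenix types:

{code}
import java.sql.SQLException;
import java.util.Iterator;
import java.util.List;

// Hypothetical illustration of "skip writing to a dropped index on failure".
public class DroppedIndexSkipper {
    interface IndexUpdate { void apply() throws SQLException; }
    static class TableNotFoundException extends SQLException { }

    static void writeIndexUpdates(List<IndexUpdate> updates) throws SQLException {
        for (Iterator<IndexUpdate> it = updates.iterator(); it.hasNext(); ) {
            IndexUpdate update = it.next();
            try {
                update.apply();
            } catch (TableNotFoundException e) {
                // The index was dropped mid-upsert: discard its pending
                // writes instead of failing the whole query.
                it.remove();
            }
        }
    }
}
{code}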





[jira] [Updated] (PHOENIX-3162) TableNotFoundException might be thrown when an index dropped while upserting.

2016-08-08 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-3162:
-
Attachment: PHOENIX-3162.patch

Here is a small fix. I changed the test to verify dropping an index while an 
upsert is running. [~jamestaylor] Please review.

> TableNotFoundException might be thrown when an index dropped while upserting.
> -
>
> Key: PHOENIX-3162
> URL: https://issues.apache.org/jira/browse/PHOENIX-3162
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 4.8.1
>
> Attachments: PHOENIX-3162.patch
>





[jira] [Commented] (PHOENIX-3162) TableNotFoundException might be thrown when an index dropped while upserting.

2016-08-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15411944#comment-15411944
 ] 

Hadoop QA commented on PHOENIX-3162:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12822587/PHOENIX-3162.patch
  against master branch at commit ba82b1cb5a14c2cf109deb8a862389142d92f541.
  ATTACHMENT ID: 12822587

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this patch.
Also please list what manual steps were performed to verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/501//console

This message is automatically generated.

> TableNotFoundException might be thrown when an index dropped while upserting.
> -
>
> Key: PHOENIX-3162
> URL: https://issues.apache.org/jira/browse/PHOENIX-3162
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 4.8.1
>
> Attachments: PHOENIX-3162.patch
>





[jira] [Updated] (PHOENIX-2208) Navigation to trace information in tracing UI should be driven off of query instead of trace ID

2016-08-08 Thread Nishani (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nishani  updated PHOENIX-2208:
--
Attachment: tracetimeline.png
traceduration.png
tracedistributionbycount.png
tracebyhostname.png

Hi,

The tracing timeline with a custom tooltip is attached. The trace distribution 
is visualized by hostname, by trace count, and by duration.

> Navigation to trace information in tracing UI should be driven off of query 
> instead of trace ID
> ---
>
> Key: PHOENIX-2208
> URL: https://issues.apache.org/jira/browse/PHOENIX-2208
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Nishani 
> Attachments: Query-builder.png, Screenshot from 2016-06-21 
> 18-03-02.png, tracebyhostname.png, tracedistributionbycount.png, 
> traceduration.png, tracetimeline.png
>
>
> Instead of driving the trace UI based on the trace ID, we should drive it off 
> of the query string: something like a drop-down list showing the query 
> strings of the last N queries, plus a search box accepting a regex query 
> string and perhaps a time range, which would look up the trace ID under the 
> covers.



[GitHub] phoenix pull request #189: PHOENIX-3036 Modify phoenix IT tests to extend Ba...

2016-08-08 Thread samarthjain
Github user samarthjain commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/189#discussion_r73924614
  
--- Diff: phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java ---
@@ -2063,5 +2080,35 @@ protected static void populateMultiCFTestTable(String tableName, Date date) thro
 } finally {
 conn.close();
 }
-}
+}
+
+protected static void verifySequence(String tenantID, String sequenceName, String sequenceSchemaName, boolean exists) throws SQLException {
+
+PhoenixConnection phxConn = DriverManager.getConnection(getUrl()).unwrap(PhoenixConnection.class);
+String ddl = "SELECT "
++ PhoenixDatabaseMetaData.TENANT_ID + ","
++ PhoenixDatabaseMetaData.SEQUENCE_SCHEMA + ","
++ PhoenixDatabaseMetaData.SEQUENCE_NAME
++ " FROM " + PhoenixDatabaseMetaData.SYSTEM_SEQUENCE
++ " WHERE ";
+
+ddl += " TENANT_ID  " + ((tenantID == null) ? "IS NULL " : " = '" + tenantID + "'");
+ddl += " AND SEQUENCE_NAME " + ((sequenceName == null) ? "IS NULL " : " = '" + sequenceName + "'");
+ddl += " AND SEQUENCE_SCHEMA " + ((sequenceSchemaName == null) ? "IS NULL " : " = '" + sequenceSchemaName + "'");
+
+ResultSet rs = phxConn.createStatement().executeQuery(ddl);
+//boolean res =
+while (rs.next()) {
+String ten = rs.getString("TENANT_ID");
+String seqN = rs.getString("SEQUENCE_SCHEMA");
+String seqaN = rs.getString("SEQUENCE_NAME");
+String seqNam = rs.getString("SEQUENCE_SCHEMA");
+}
+/*if(exists) {
--- End diff --

This shouldn't be commented out, I think.




[jira] [Commented] (PHOENIX-3036) Modify phoenix IT tests to extend BaseHBaseManagedTimeTableReuseIT

2016-08-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412192#comment-15412192
 ] 

ASF GitHub Bot commented on PHOENIX-3036:
-

Github user samarthjain commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/189#discussion_r73924614
  


> Modify phoenix IT tests to extend BaseHBaseManagedTimeTableReuseIT
> --
>
> Key: PHOENIX-3036
> URL: https://issues.apache.org/jira/browse/PHOENIX-3036
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Samarth Jain
>Assignee: prakul agarwal
> Fix For: 4.9.0
>
> Attachments: PHOENIX-3036.patch
>
>






[GitHub] phoenix pull request #189: PHOENIX-3036 Modify phoenix IT tests to extend Ba...

2016-08-08 Thread samarthjain
Github user samarthjain commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/189#discussion_r73924704
  
--- Diff: phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java ---
@@ -2063,5 +2080,35 @@ protected static void populateMultiCFTestTable(String tableName, Date date) thro
 } finally {
 conn.close();
 }
-}
+}
+
+protected static void verifySequence(String tenantID, String sequenceName, String sequenceSchemaName, boolean exists) throws SQLException {
+
+PhoenixConnection phxConn = DriverManager.getConnection(getUrl()).unwrap(PhoenixConnection.class);
+String ddl = "SELECT "
++ PhoenixDatabaseMetaData.TENANT_ID + ","
++ PhoenixDatabaseMetaData.SEQUENCE_SCHEMA + ","
++ PhoenixDatabaseMetaData.SEQUENCE_NAME
++ " FROM " + PhoenixDatabaseMetaData.SYSTEM_SEQUENCE
++ " WHERE ";
+
+ddl += " TENANT_ID  " + ((tenantID == null) ? "IS NULL " : " = '" + tenantID + "'");
+ddl += " AND SEQUENCE_NAME " + ((sequenceName == null) ? "IS NULL " : " = '" + sequenceName + "'");
+ddl += " AND SEQUENCE_SCHEMA " + ((sequenceSchemaName == null) ? "IS NULL " : " = '" + sequenceSchemaName + "'");
+
+ResultSet rs = phxConn.createStatement().executeQuery(ddl);
+//boolean res =
--- End diff --

Remove commented code.




[jira] [Commented] (PHOENIX-3036) Modify phoenix IT tests to extend BaseHBaseManagedTimeTableReuseIT

2016-08-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412193#comment-15412193
 ] 

ASF GitHub Bot commented on PHOENIX-3036:
-

Github user samarthjain commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/189#discussion_r73924704
  


> Modify phoenix IT tests to extend BaseHBaseManagedTimeTableReuseIT
> --
>
> Key: PHOENIX-3036
> URL: https://issues.apache.org/jira/browse/PHOENIX-3036
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Samarth Jain
>Assignee: prakul agarwal
> Fix For: 4.9.0
>
> Attachments: PHOENIX-3036.patch
>
>






[GitHub] phoenix pull request #189: PHOENIX-3036 Modify phoenix IT tests to extend Ba...

2016-08-08 Thread samarthjain
Github user samarthjain commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/189#discussion_r73924877
  
--- Diff: pom.xml ---
@@ -226,6 +226,7 @@
   maven-failsafe-plugin
   ${maven-failsafe-plugin.version}
   
+
--- End diff --

Revert the changes to this file since it contains only whitespace changes.




[jira] [Commented] (PHOENIX-3036) Modify phoenix IT tests to extend BaseHBaseManagedTimeTableReuseIT

2016-08-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412195#comment-15412195
 ] 

ASF GitHub Bot commented on PHOENIX-3036:
-

Github user samarthjain commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/189#discussion_r73924877
  


> Modify phoenix IT tests to extend BaseHBaseManagedTimeTableReuseIT
> --
>
> Key: PHOENIX-3036
> URL: https://issues.apache.org/jira/browse/PHOENIX-3036
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Samarth Jain
>Assignee: prakul agarwal
> Fix For: 4.9.0
>
> Attachments: PHOENIX-3036.patch
>
>






[jira] [Commented] (PHOENIX-258) Use skip scan when SELECT DISTINCT on leading row key column(s)

2016-08-08 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412436#comment-15412436
 ] 

Lars Hofhansl commented on PHOENIX-258:
---

That does not at all match my experience. It only needs to do 2 (perhaps 3) 
seeks per chunk, so at most 234 seeks, which on top happen in parallel. That 
should take a few *milli* seconds.

OK... Further, it needs to seek each HFile/Memstore. Was the table just seeded? 
(Even then I'd expect maybe a few thousand seeks, and still in parallel.)

If there are many deleted rows, that might overshadow the savings here, but I 
doubt that's the case.

How long does {{SELECT /*+ RANGE_SCAN */ DISTINCT ORGANIZATION_ID FROM T}} take?


> Use skip scan when SELECT DISTINCT on leading row key column(s)
> ---
>
> Key: PHOENIX-258
> URL: https://issues.apache.org/jira/browse/PHOENIX-258
> Project: Phoenix
>  Issue Type: Task
>Reporter: ryang-sfdc
>Assignee: Lars Hofhansl
> Fix For: 4.8.0
>
> Attachments: 258-WIP.txt, 258-v1.txt, 258-v10.txt, 258-v11.txt, 
> 258-v12.txt, 258-v13.txt, 258-v14.txt, 258-v15.txt, 258-v16.txt, 258-v17.txt, 
> 258-v2.txt, 258-v3.txt, 258-v4.txt, 258-v5.txt, 258-v6.txt, 258-v7.txt, 
> 258-v8.txt, 258-v9.txt, 258.txt, DistinctFixedPrefixFilter.java, in-clause.png
>
>
> create table(a varchar(32) not null, date date not null constraint pk primary 
> key(a,date))
> [["PLAN"],["CLIENT PARALLEL 94-WAY FULL SCAN OVER foo"],["SERVER 
> AGGREGATE INTO ORDERED DISTINCT ROWS BY [a]"],["CLIENT MERGE SORT"]]  
>
> We should skip scan.





[jira] [Commented] (PHOENIX-2995) Write performance severely degrades with large number of views

2016-08-08 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412589#comment-15412589
 ] 

James Taylor commented on PHOENIX-2995:
---

- Is this thread safe? What if the count changes between the get and 
decrementAndGet()?
{code}
+int count = this.referenceCount.get();
+if (count>0) {
+this.referenceCount.decrementAndGet();
+return new PMetaDataCache(this);
+}
+else {
+return this;
+}
{code}
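A race-free variant, as an editorial sketch rather than the patch itself: loop on compareAndSet so the check and the decrement form one atomic step.

{code}
import java.util.concurrent.atomic.AtomicInteger;

// Sketch only: the count cannot change between the check and the decrement,
// because compareAndSet performs both against the observed value atomically.
class SafeDecrement {
    private final AtomicInteger referenceCount = new AtomicInteger();

    /** Returns true if the count was positive and has been decremented. */
    boolean tryDecrement() {
        for (;;) {
            int count = referenceCount.get();
            if (count <= 0) {
                return false;                  // nothing to release
            }
            if (referenceCount.compareAndSet(count, count - 1)) {
                return true;                   // check-and-decrement succeeded
            }
            // another thread changed the count in between; retry
        }
    }
}
{code}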
- This can be improved by not doing multiple gets/containsKey calls. Instead, 
always do a single put and look at the return value. If it is non-null, just 
combine the old value and the new value (making sure the new values appear last).
{code}
+TableInfo tableInfo = new TableInfo(isDataTable, hTableName, tableRef);
+if (!physicalTableMutationMap.containsKey(tableInfo)) {
+physicalTableMutationMap.put(tableInfo, Lists.newArrayList());
+}
 isDataTable = false;
-}
-if (tableRef.getTable().getType() != PTableType.INDEX) {
-numRows -= valuesMap.size();
+physicalTableMutationMap.get(tableInfo).addAll(mutationList);
{code}
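A sketch of the single-put pattern being suggested, with String/Integer standing in for the real TableInfo and mutation types: put returns the previous value, so one call replaces the containsKey/get/put sequence.

{code}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch only: one put per table instead of containsKey + put + get.
class SinglePutExample {
    private final Map<String, List<Integer>> mutationMap = new HashMap<>();

    void add(String tableInfo, List<Integer> mutationList) {
        List<Integer> fresh = new ArrayList<>(mutationList);
        // put returns the previously mapped list, or null if none existed
        List<Integer> old = mutationMap.put(tableInfo, fresh);
        if (old != null) {
            fresh.addAll(0, old);  // keep old values first, new values last
        }
    }
}
{code}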
- Perhaps another change you've made circumvents this, but I don't think we can 
always do a mutations.remove() here, as we may get a 
ConcurrentModificationException (see previous code). I'm not positive we're 
using numRows anywhere (we had a check about that before, I believe); if we're 
not, I suppose we can remove it.
{code}
+if (tableInfo.isDataTable()) {
+numRows -= numMutations;
+}
+// Remove batches as we process them
+mutations.remove(origTableRef);
{code}
- Overall the data structures can be greatly improved here in MutationState. We 
don't need to use these maps (both at the top level and within the map), but 
instead can just use arrays. We end up dumping everything into a Map eventually 
for HBase, so rows at the end would naturally overwrite the earlier rows. I 
believe I have a separate JIRA for this, but if not I'll file one.

> Write performance severely degrades with large number of views 
> ---
>
> Key: PHOENIX-2995
> URL: https://issues.apache.org/jira/browse/PHOENIX-2995
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Mujtaba Chohan
>Assignee: Thomas D'Silva
>  Labels: Argus
> Fix For: 4.8.1
>
> Attachments: PHOENIX-2995-v2.patch, PHOENIX-2995.patch, 
> create_view_and_upsert.png, image.png, image2.png, image3.png, upsert_rate.png
>
>
> Write performance for each 1K batch degrades significantly when there are 
> *10K* views being written to at random with the default 
> {{phoenix.client.maxMetaDataCacheSize}}. With all views created, the upsert 
> rate remains around 25 seconds per 1K batch, i.e. an upsert rate of ~2K rows/min. 
> When {{phoenix.client.maxMetaDataCacheSize}} is increased to 100MB+, the view 
> does not need to get re-resolved and the upsert rate gets back to a normal 
> ~60K rows/min.
> With *100K* views and {{phoenix.client.maxMetaDataCacheSize}} set to 1GB, I 
> wasn't able to create all 100K views, as the upsert time for each 1K batch 
> keeps steadily increasing.
> The following graph shows the 1K-batch upsert rate over time as the number of 
> views varies. Rows are upserted to random views; {{CREATE VIEW IF NOT EXISTS ... 
> APPEND_ONLY_SCHEMA = true, UPDATE_CACHE_FREQUENCY=90}} is executed before the 
> upsert statement.
> !upsert_rate.png!
> Base table is also created with {{APPEND_ONLY_SCHEMA = true, 
> UPDATE_CACHE_FREQUENCY = 90, AUTO_PARTITION_SEQ}}





[jira] [Created] (PHOENIX-3163) Split during global index creation may cause ERROR 201 error.

2016-08-08 Thread Sergey Soldatov (JIRA)
Sergey Soldatov created PHOENIX-3163:


 Summary: Split during global index creation may cause ERROR 201 
error.
 Key: PHOENIX-3163
 URL: https://issues.apache.org/jira/browse/PHOENIX-3163
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.8.0
Reporter: Sergey Soldatov
Assignee: Sergey Soldatov


When we create a global index and a split happens meanwhile, there is a chance 
of failing with ERROR 201:
{noformat}
2016-08-08 15:55:17,248 INFO  [Thread-6] org.apache.phoenix.iterate.BaseResultIterators(878): Failed to execute task during cancel
java.util.concurrent.ExecutionException: java.sql.SQLException: ERROR 201 (22000): Illegal data.
    at java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.util.concurrent.FutureTask.get(FutureTask.java:192)
    at org.apache.phoenix.iterate.BaseResultIterators.close(BaseResultIterators.java:872)
    at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:809)
    at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:713)
    at org.apache.phoenix.iterate.RoundRobinResultIterator.getIterators(RoundRobinResultIterator.java:176)
    at org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:91)
    at org.apache.phoenix.compile.UpsertCompiler$2.execute(UpsertCompiler.java:815)
    at org.apache.phoenix.compile.DelegateMutationPlan.execute(DelegateMutationPlan.java:31)
    at org.apache.phoenix.compile.PostIndexDDLCompiler$1.execute(PostIndexDDLCompiler.java:124)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:2823)
    at org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1079)
    at org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1382)
    at org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
    at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:343)
    at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:331)
    at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:330)
    at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1440)
    at org.apache.phoenix.hbase.index.write.TestIndexWriter$1.run(TestIndexWriter.java:93)
Caused by: java.sql.SQLException: ERROR 201 (22000): Illegal data.
    at org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:441)
    at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
    at org.apache.phoenix.schema.types.PDataType.newIllegalDataException(PDataType.java:287)
    at org.apache.phoenix.schema.types.PUnsignedSmallint$UnsignedShortCodec.decodeShort(PUnsignedSmallint.java:146)
    at org.apache.phoenix.schema.types.PSmallint.toObject(PSmallint.java:104)
    at org.apache.phoenix.schema.types.PSmallint.toObject(PSmallint.java:28)
    at org.apache.phoenix.schema.types.PDataType.toObject(PDataType.java:980)
    at org.apache.phoenix.schema.types.PUnsignedSmallint.toObject(PUnsignedSmallint.java:102)
    at org.apache.phoenix.schema.types.PDataType.toObject(PDataType.java:980)
    at org.apache.phoenix.schema.types.PDataType.toObject(PDataType.java:992)
    at org.apache.phoenix.schema.types.PDataType.coerceBytes(PDataType.java:830)
    at org.apache.phoenix.schema.types.PDecimal.coerceBytes(PDecimal.java:342)
    at org.apache.phoenix.schema.types.PDataType.coerceBytes(PDataType.java:810)
    at org.apache.phoenix.expression.CoerceExpression.evaluate(CoerceExpression.java:149)
    at org.apache.phoenix.compile.ExpressionProjector.getValue(ExpressionProjector.java:69)
    at org.apache.phoenix.jdbc.PhoenixResultSet.getBytes(PhoenixResultSet.java:308)
    at org.apache.phoenix.compile.UpsertCompiler.upsertSelect(UpsertCompiler.java:197)
    at org.apache.phoenix.compile.UpsertCompiler.access$000(UpsertCompiler.java:115)
    at org.apache.phoenix.compile.UpsertCompiler$UpsertingParallelIteratorFactory.mutate(UpsertCompiler.java:259)
    at org.apache.phoenix.compile.MutatingParallelIteratorFactory.newIterator(MutatingParallelIteratorFactory.java:59)
    at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:112)
    at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:103)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
    at java.util.concurrent.ThreadPoolExec
{noformat}
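
A rough repro sketch, assuming a hypothetical table with an UNSIGNED_SMALLINT 
key (the type being decoded when the trace fails) and a region split triggered 
externally while the index builds; this is a guess at a driver under those 
assumptions, not a confirmed test case:

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SplitDuringIndexBuildSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            // Hypothetical schema; the trace fails decoding an UNSIGNED_SMALLINT.
            stmt.execute("CREATE TABLE IF NOT EXISTS T "
                    + "(ID UNSIGNED_SMALLINT NOT NULL PRIMARY KEY, V VARCHAR)");
            for (int i = 1; i <= 10000; i++) {
                stmt.execute("UPSERT INTO T VALUES (" + i + ", 'v" + i + "')");
            }
            conn.commit();
            // Split the table's region from another thread or the HBase shell
            // while this statement runs; with unlucky timing the index build
            // fails with ERROR 201 (22000): Illegal data, as in the trace above.
            stmt.execute("CREATE INDEX IDX_T_V ON T (V)");
        }
    }
}
{code}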

[jira] [Updated] (PHOENIX-2944) DATE Comparison Broken

2016-08-08 Thread Saurabh Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Saurabh Seth updated PHOENIX-2944:
--
Attachment: PHOENIX-2944.patch

I want to contribute to Phoenix and took a look at this issue just to get 
started. The issue here is with the compareTo methods in the PDate and 
PTimestamp classes, which are used during literal value comparisons.

I am attaching a patch with the fix and a few additional unit tests.
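
For readers following along, a standalone sketch, not the patch itself, of the 
normalization a mixed DATE/TIMESTAMP comparison has to perform (epoch millis 
first, then the sub-millisecond part of a Timestamp's nanos) for the examples 
below to come out consistent; the helper and class names are made up:

{code}
import java.sql.Date;
import java.sql.Timestamp;

public class DateTimestampCompareSketch {
    // Compare two date-like values by epoch millis first, then by the
    // sub-millisecond remainder of a Timestamp's nanos, so DATE vs TIMESTAMP
    // comparisons stay consistent in both directions.
    static int compare(java.util.Date lhs, java.util.Date rhs) {
        int byMillis = Long.compare(lhs.getTime(), rhs.getTime());
        if (byMillis != 0) {
            return byMillis;
        }
        int lhsNanos = (lhs instanceof Timestamp) ? ((Timestamp) lhs).getNanos() % 1_000_000 : 0;
        int rhsNanos = (rhs instanceof Timestamp) ? ((Timestamp) rhs).getNanos() % 1_000_000 : 0;
        return Integer.compare(lhsNanos, rhsNanos);
    }

    public static void main(String[] args) {
        Date d = Date.valueOf("2016-05-10");
        Timestamp ts = Timestamp.valueOf("2016-05-11 00:00:00");
        System.out.println(compare(d, ts) > 0); // false: the DATE is earlier
        System.out.println(compare(ts, d) > 0); // true: symmetric in reverse
    }
}
{code}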

> DATE Comparison Broken
> --
>
> Key: PHOENIX-2944
> URL: https://issues.apache.org/jira/browse/PHOENIX-2944
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Aaron Stephens
>Priority: Critical
>  Labels: Phoenix
> Attachments: PHOENIX-2944.patch
>
>
> It appears that comparisons involving the DATE type are broken.  See examples 
> below:
> {noformat}
> > select DATE '2016-05-10 00:00:00' > DATE '2016-05-11 00:00:00';
> +-------+
> | true  |
> +-------+
> | true  |
> +-------+
> 1 row selected (0.001 seconds)
> > select TIMESTAMP '2016-05-10 00:00:00' > DATE '2016-05-11 00:00:00';
> +-------+
> | true  |
> +-------+
> | true  |
> +-------+
> 1 row selected (0.001 seconds)
> > select DATE '2016-05-10 00:00:00' > TIMESTAMP '2016-05-11 00:00:00';
> +-------+
> | true  |
> +-------+
> | true  |
> +-------+
> 1 row selected (0.001 seconds)
> > select TIMESTAMP '2016-05-10 00:00:00' > TIMESTAMP '2016-05-11 00:00:00';
> +--------+
> | false  |
> +--------+
> | false  |
> +--------+
> 1 row selected (0.001 seconds)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2944) DATE Comparison Broken

2016-08-08 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-2944:
---
Assignee: Saurabh Seth

> DATE Comparison Broken
> --
>
> Key: PHOENIX-2944
> URL: https://issues.apache.org/jira/browse/PHOENIX-2944
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Aaron Stephens
>Assignee: Saurabh Seth
>Priority: Critical
>  Labels: Phoenix
> Fix For: 4.8.1
>
> Attachments: PHOENIX-2944.patch
>
>
> It appears that comparisons involving the DATE type are broken.  See examples 
> below:
> {noformat}
> > select DATE '2016-05-10 00:00:00' > DATE '2016-05-11 00:00:00';
> +-------+
> | true  |
> +-------+
> | true  |
> +-------+
> 1 row selected (0.001 seconds)
> > select TIMESTAMP '2016-05-10 00:00:00' > DATE '2016-05-11 00:00:00';
> +-------+
> | true  |
> +-------+
> | true  |
> +-------+
> 1 row selected (0.001 seconds)
> > select DATE '2016-05-10 00:00:00' > TIMESTAMP '2016-05-11 00:00:00';
> +-------+
> | true  |
> +-------+
> | true  |
> +-------+
> 1 row selected (0.001 seconds)
> > select TIMESTAMP '2016-05-10 00:00:00' > TIMESTAMP '2016-05-11 00:00:00';
> +--------+
> | false  |
> +--------+
> | false  |
> +--------+
> 1 row selected (0.001 seconds)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2944) DATE Comparison Broken

2016-08-08 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15413001#comment-15413001
 ] 

Ankit Singhal commented on PHOENIX-2944:


+1, thanks [~saurabh.s...@gmail.com] for the patch. It looks good.
I have also added you as a contributor, so you can assign a JIRA to yourself 
when you are working on it.

> DATE Comparison Broken
> --
>
> Key: PHOENIX-2944
> URL: https://issues.apache.org/jira/browse/PHOENIX-2944
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Aaron Stephens
>Priority: Critical
>  Labels: Phoenix
> Fix For: 4.8.1
>
> Attachments: PHOENIX-2944.patch
>
>
> It appears that comparisons involving the DATE type are broken.  See examples 
> below:
> {noformat}
> > select DATE '2016-05-10 00:00:00' > DATE '2016-05-11 00:00:00';
> +-------+
> | true  |
> +-------+
> | true  |
> +-------+
> 1 row selected (0.001 seconds)
> > select TIMESTAMP '2016-05-10 00:00:00' > DATE '2016-05-11 00:00:00';
> +-------+
> | true  |
> +-------+
> | true  |
> +-------+
> 1 row selected (0.001 seconds)
> > select DATE '2016-05-10 00:00:00' > TIMESTAMP '2016-05-11 00:00:00';
> +-------+
> | true  |
> +-------+
> | true  |
> +-------+
> 1 row selected (0.001 seconds)
> > select TIMESTAMP '2016-05-10 00:00:00' > TIMESTAMP '2016-05-11 00:00:00';
> +--------+
> | false  |
> +--------+
> | false  |
> +--------+
> 1 row selected (0.001 seconds)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2944) DATE Comparison Broken

2016-08-08 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15413002#comment-15413002
 ] 

Ankit Singhal commented on PHOENIX-2944:


I'll commit this after the QA bot gives a clean run.

> DATE Comparison Broken
> --
>
> Key: PHOENIX-2944
> URL: https://issues.apache.org/jira/browse/PHOENIX-2944
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Aaron Stephens
>Assignee: Saurabh Seth
>Priority: Critical
>  Labels: Phoenix
> Fix For: 4.8.1
>
> Attachments: PHOENIX-2944.patch
>
>
> It appears that comparisons involving the DATE type are broken.  See examples 
> below:
> {noformat}
> > select DATE '2016-05-10 00:00:00' > DATE '2016-05-11 00:00:00';
> +-------+
> | true  |
> +-------+
> | true  |
> +-------+
> 1 row selected (0.001 seconds)
> > select TIMESTAMP '2016-05-10 00:00:00' > DATE '2016-05-11 00:00:00';
> +-------+
> | true  |
> +-------+
> | true  |
> +-------+
> 1 row selected (0.001 seconds)
> > select DATE '2016-05-10 00:00:00' > TIMESTAMP '2016-05-11 00:00:00';
> +-------+
> | true  |
> +-------+
> | true  |
> +-------+
> 1 row selected (0.001 seconds)
> > select TIMESTAMP '2016-05-10 00:00:00' > TIMESTAMP '2016-05-11 00:00:00';
> +--------+
> | false  |
> +--------+
> | false  |
> +--------+
> 1 row selected (0.001 seconds)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)