[jira] [Commented] (PHOENIX-3159) CachingHTableFactory may close HTable during eviction even if it is getting used for writing by another thread.

2016-10-03 Thread Hudson (JIRA)

[ https://issues.apache.org/jira/browse/PHOENIX-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15542601#comment-15542601 ]

Hudson commented on PHOENIX-3159:
-

SUCCESS: Integrated in Jenkins build Phoenix-master #1428 (See [https://builds.apache.org/job/Phoenix-master/1428/])
PHOENIX-3159 CachingHTableFactory may close HTable during eviction even if it is getting used for writing by another thread (ankitsinghal59: rev 4c0aeb0d530852bc12e5fcd930e336fb19434397)
* (edit) phoenix-core/src/test/java/org/apache/phoenix/hbase/index/write/TestParalleIndexWriter.java
* (edit) phoenix-core/src/test/java/org/apache/phoenix/hbase/index/write/TestParalleWriterIndexCommitter.java
* (edit) phoenix-core/src/test/java/org/apache/phoenix/hbase/index/write/TestIndexWriter.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/hbase/index/write/recovery/TrackingParallelWriterIndexCommitter.java
* (edit) phoenix-core/src/test/java/org/apache/phoenix/hbase/index/write/TestCachingHTableFactory.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/hbase/index/table/CachingHTableFactory.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/hbase/index/table/HTableFactory.java
* (edit) phoenix-core/src/test/java/org/apache/phoenix/hbase/index/write/FakeTableFactory.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/hbase/index/table/CoprocessorHTableFactory.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/hbase/index/write/IndexWriterUtils.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/hbase/index/write/ParallelWriterIndexCommitter.java


> CachingHTableFactory may close HTable during eviction even if it is getting
> used for writing by another thread.
> ---
>
> Key: PHOENIX-3159
> URL: https://issues.apache.org/jira/browse/PHOENIX-3159
> Project: Phoenix
> Issue Type: Bug
> Reporter: Ankit Singhal
> Assignee: Ankit Singhal
> Fix For: 4.9.0, 4.8.2
>
> Attachments: PHOENIX-3159.patch, PHOENIX-3159_v1.patch, PHOENIX-3159_v2.patch, PHOENIX-3159_v3.patch, PHOENIX-3159_v4.patch, PHOENIX-3159_v5.patch
>
>
> CachingHTableFactory may close an HTable during eviction even if it is being used for writing by another thread, which causes the writing thread to fail and the index to be disabled.
> LRU eviction closes the HTable (or its underlying connection) when the cache is full and a new HTable is requested:
> {code}
> 2016-08-04 13:45:21,109 DEBUG [nat-s11-4-ioss-phoenix-1-5.openstacklocal,16020,1470297472814-index-writer--pool11-t35] client.ConnectionManager$HConnectionImplementation: Closing HConnection (debugging purposes only)
> java.lang.Exception
> at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.internalClose(ConnectionManager.java:2423)
> at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.close(ConnectionManager.java:2447)
> at org.apache.hadoop.hbase.client.CoprocessorHConnection.close(CoprocessorHConnection.java:41)
> at org.apache.hadoop.hbase.client.HTableWrapper.internalClose(HTableWrapper.java:91)
> at org.apache.hadoop.hbase.client.HTableWrapper.close(HTableWrapper.java:107)
> at org.apache.phoenix.hbase.index.table.CachingHTableFactory$HTableInterfaceLRUMap.removeLRU(CachingHTableFactory.java:61)
> at org.apache.commons.collections.map.LRUMap.addMapping(LRUMap.java:256)
> at org.apache.commons.collections.map.AbstractHashedMap.put(AbstractHashedMap.java:284)
> at org.apache.phoenix.hbase.index.table.CachingHTableFactory.getTable(CachingHTableFactory.java:100)
> at org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:160)
> at org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:136)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> But the IndexWriter was still using this old connection, closed during LRU eviction, to write to the table:
> {code}
> 2016-08-04 13:44:59,553 ERROR [htable-pool659-t1] client.AsyncProcess: Cannot get replica 0 location for {"totalColumns":1,"row":"\\xC7\\x03\\x04\\x06X\\x1C)\\x00\\x80\\x07\\xB0X","families":{"0":[{"qualifier":"_0","vlen":2,"tag":[],"timestamp":1470318296425}]}}
> java.io.IOException: hconnection-0x21f468be closed
> at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1153)
> at org.apache.hadoop.hbase.client.CoprocessorHConnection.locateRegion(CoprocessorHConnection.java:41)
> at org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.findAllLocationsOrFail(AsyncProcess.java:949)
> at org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.groupAndSendMultiAction(AsyncProcess.java:866)
> at org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.resubmit(AsyncProcess.java:1195)
> at org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.receiveGlobalFailure(AsyncProcess.java:1162)
> at org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.access$1100(AsyncProcess.java:584)
> at org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl$SingleServerRequestRunnable.run(AsyncProcess.java:727)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> Although a workaround is to increase the cache size (index.tablefactory.cache.size), we should still handle the closing of in-use HTables during eviction gracefully.

[jira] [Commented] (PHOENIX-3159) CachingHTableFactory may close HTable during eviction even if it is getting used for writing by another thread.

2016-09-28 Thread Devaraj Das (JIRA)

[ https://issues.apache.org/jira/browse/PHOENIX-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15531520#comment-15531520 ]

Devaraj Das commented on PHOENIX-3159:
--

+1 


[jira] [Commented] (PHOENIX-3159) CachingHTableFactory may close HTable during eviction even if it is getting used for writing by another thread.

2016-09-19 Thread James Taylor (JIRA)

[ https://issues.apache.org/jira/browse/PHOENIX-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15504687#comment-15504687 ]

James Taylor commented on PHOENIX-3159:
---

No problem, [~devaraj]. Thanks for the thorough review. I think it's fine to 
push this out to 4.9.0 and/or 4.8.2 as those will happen soon after 4.8.1.


[jira] [Commented] (PHOENIX-3159) CachingHTableFactory may close HTable during eviction even if it is getting used for writing by another thread.

2016-09-19 Thread Devaraj Das (JIRA)

[ https://issues.apache.org/jira/browse/PHOENIX-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15504316#comment-15504316 ]

Devaraj Das commented on PHOENIX-3159:
--

Sorry [~jamestaylor] for the delay. Hopefully, if [~an...@apache.org] can answer these questions quickly, we can still get this into 4.8.1:
1. The "pool" in CachingHTableFactory shouldn't be shut down if the pool was initially obtained via getTable(tableName, pool), should it?
2. For the case where you create the pool, should maxThreads simply be Integer.MAX_VALUE? Assuming that *all* table accesses go through this CachingHTableFactory, that should be okay (no leaks and such), since for the case where the pool is passed in from outside, it appears to get created with Integer.MAX_VALUE.
3. After the pool is shut down, I wonder if you should be doing a pool.awaitTermination(), or else the threads will be killed if the JVM is exiting (this should be verified though; I am recalling it from memory); a sketch of that pattern follows below. I also wonder whether this (threads did partial work and then exited) actually matters in practice for Phoenix.
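
A rough sketch of the shutdown-then-wait pattern point 3 alludes to (PoolShutdown and shutdownAndAwait are hypothetical names for illustration, not Phoenix code):
{code}
// Shut the pool down, then wait, so in-flight index writes can finish instead
// of being abandoned when the JVM exits.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

final class PoolShutdown {
    static void shutdownAndAwait(ExecutorService pool, long timeoutSeconds) {
        pool.shutdown(); // reject new tasks; already-queued tasks keep running
        try {
            if (!pool.awaitTermination(timeoutSeconds, TimeUnit.SECONDS)) {
                // Tasks are still running after the grace period; interrupt
                // them rather than letting the JVM kill the threads mid-write.
                pool.shutdownNow();
            }
        } catch (InterruptedException e) {
            pool.shutdownNow();
            Thread.currentThread().interrupt(); // preserve the interrupt status
        }
    }
}
{code}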



[jira] [Commented] (PHOENIX-3159) CachingHTableFactory may close HTable during eviction even if it is getting used for writing by another thread.

2016-09-16 Thread Devaraj Das (JIRA)

[ https://issues.apache.org/jira/browse/PHOENIX-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15496617#comment-15496617 ]

Devaraj Das commented on PHOENIX-3159:
--

Will get to it today or over the weekend, James.


[jira] [Commented] (PHOENIX-3159) CachingHTableFactory may close HTable during eviction even if it is getting used for writing by another thread.

2016-09-16 Thread James Taylor (JIRA)

[ https://issues.apache.org/jira/browse/PHOENIX-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15496599#comment-15496599 ]

James Taylor commented on PHOENIX-3159:
---

[~devaraj] - would it be possible to give this a final review as we'd like to 
get this into the 4.8.1 RC we plan to cut on Monday?


[jira] [Commented] (PHOENIX-3159) CachingHTableFactory may close HTable during eviction even if it is getting used for writing by another thread.

2016-09-10 Thread James Taylor (JIRA)

[ https://issues.apache.org/jira/browse/PHOENIX-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15480377#comment-15480377 ]

James Taylor commented on PHOENIX-3159:
---

Is this ready to go, [~an...@apache.org]? If so, please check it in to the 4.8 and 4.x branches.


[jira] [Commented] (PHOENIX-3159) CachingHTableFactory may close HTable during eviction even if it is getting used for writing by another thread.

2016-08-17 Thread Hadoop QA (JIRA)

[ https://issues.apache.org/jira/browse/PHOENIX-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15424082#comment-15424082 ]

Hadoop QA commented on PHOENIX-3159:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12824096/PHOENIX-3159_v3.patch
  against master branch at commit 657917bfb15ecd5bef4616458f2c8cd3ede0cf6b.
  ATTACHMENT ID: 12824096

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 15 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
34 warning messages.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines longer than 100:
+ synchronized (this) { // the whole operation of closing and checking the count should be atomic
+  public CachingHTableFactory(HTableFactory tableFactory, Configuration conf, RegionCoprocessorEnvironment env) {
+  public CachingHTableFactory(HTableFactory factory, int cacheSize, RegionCoprocessorEnvironment env) {
+this.pool = ThreadPoolManager.getExecutor(new ThreadPoolBuilder("CachedHtables", env.getConfiguration())
+  public HTableInterface getTable(ImmutableBytesPtr tablename, ExecutorService pool) throws IOException {
+public HTableInterface getTable(ImmutableBytesPtr tablename,ExecutorService pool) throws IOException {
+  public HTableInterface getTable(ImmutableBytesPtr tablename, ExecutorService pool) throws IOException;
+   public static final String INDEX_WRITES_THREAD_MAX_PER_REGIONSERVER_KEY = "phoenix.index.writes.threads.max";
+public static final String INDEX_WRITER_KEEP_ALIVE_TIME_CONF_KEY = "index.writer.threads.keepalivetime";
+void setup(HTableFactory factory, ExecutorService pool, Abortable abortable, Stoppable stop, int cacheSize, RegionCoprocessorEnvironment env) {

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/520//testReport/
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/520//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/520//console

This message is automatically generated.


[jira] [Commented] (PHOENIX-3159) CachingHTableFactory may close HTable during eviction even if it is getting used for writing by another thread.

2016-08-10 Thread Devaraj Das (JIRA)

[ https://issues.apache.org/jira/browse/PHOENIX-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15415853#comment-15415853 ]

Devaraj Das commented on PHOENIX-3159:
--

Thanks for the clarification, [~an...@apache.org]. Yeah, it should be okay to share the thread pool...


[jira] [Commented] (PHOENIX-3159) CachingHTableFactory may close HTable during eviction even if it is getting used for writing by another thread.

2016-08-10 Thread Hadoop QA (JIRA)

[ https://issues.apache.org/jira/browse/PHOENIX-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15415468#comment-15415468 ]

Hadoop QA commented on PHOENIX-3159:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12823070/PHOENIX-3159_v2.patch
  against master branch at commit ba82b1cb5a14c2cf109deb8a862389142d92f541.
  ATTACHMENT ID: 12823070

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
34 warning messages.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines longer than 100:
+ synchronized (this) { // the whole operation of closing and checking the count should be atomic
+  public HTableInterface getTable(ImmutableBytesPtr tablename, ExecutorService pool) throws IOException {
+public HTableInterface getTable(ImmutableBytesPtr tablename,ExecutorService pool) throws IOException {
+  public HTableInterface getTable(ImmutableBytesPtr tablename, ExecutorService pool) throws IOException;
+public HTableInterface getTable(ImmutableBytesPtr tablename, ExecutorService pool) throws IOException {

{color:red}-1 core tests{color}.  The patch failed these unit tests:
    org.apache.phoenix.hbase.index.write.TestCachingHTableFactory

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/508//testReport/
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/508//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/508//console

This message is automatically generated.


[jira] [Commented] (PHOENIX-3159) CachingHTableFactory may close HTable during eviction even if it is getting used for writing by another thread.

2016-08-09 Thread Devaraj Das (JIRA)

[ https://issues.apache.org/jira/browse/PHOENIX-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15414384#comment-15414384 ]

Devaraj Das commented on PHOENIX-3159:
--

Thanks [~an...@apache.org]. Do check if this race is possible:
1. At time t1, the LRU cache evictor thread calls getReferenceCount and it returns 0. The context switches to thread-1.
2. At time t1+1, thread-1 does getTable on the same table, but the context switches away from this thread before it can do incrementReferenceCount.
3. At time t1+2, the LRU cache evictor continues execution and goes ahead and closes the CachedHTableWrapper instance. Since getReferenceCount is still 0, this finally invokes close() on the real table instance.
We would then end up working with a "closed" HTable instance (wrapped in CachedHTableWrapper), since that is what getTable is going to return. If the above can happen, it should be prevented; see the sketch below.
Also, remove the reference to workingTables from the patch.
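
A rough sketch of the reference-counted wrapper being discussed (the class and method names come from this thread, but the body is an illustration of the approach, not the patch verbatim; evict() in particular is a hypothetical name):
{code}
// The key idea: the count check and the real close happen under one lock, and
// eviction only marks the table; the delegate is closed once it is both
// evicted and no longer referenced by any writer.
import java.io.IOException;

import org.apache.hadoop.hbase.client.HTableInterface;

public class CachedHTableWrapper {
    private final HTableInterface delegate;
    private int referenceCount; // guarded by 'this'
    private boolean evicted;    // guarded by 'this'; set by the LRU evictor

    public CachedHTableWrapper(HTableInterface delegate) {
        this.delegate = delegate;
    }

    // getTable() calls this before handing the wrapper out, so an in-use
    // table always has a positive count.
    public synchronized void incrementReferenceCount() {
        referenceCount++;
    }

    // Writers call this when they are done with the table.
    public synchronized void close() throws IOException {
        referenceCount--;
        closeIfUnused();
    }

    // The evictor calls this instead of close(): mark the table evicted, but
    // defer the real close while any writer still holds a reference.
    public synchronized void evict() throws IOException {
        evicted = true;
        closeIfUnused();
    }

    private void closeIfUnused() throws IOException {
        if (evicted && referenceCount <= 0) {
            delegate.close(); // safe: no thread can be writing through it now
        }
    }
}
{code}
As the scenario above shows, this sketch is only safe if the factory serializes the cache lookup, the increment, and eviction under its own lock; the wrapper's lock alone cannot order an increment that has not happened yet.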


[jira] [Commented] (PHOENIX-3159) CachingHTableFactory may close HTable during eviction even if it is getting used for writing by another thread.

2016-08-08 Thread Hadoop QA (JIRA)

[ https://issues.apache.org/jira/browse/PHOENIX-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15411739#comment-15411739 ]

Hadoop QA commented on PHOENIX-3159:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12822549/PHOENIX-3159_v1.patch
  against master branch at commit ba82b1cb5a14c2cf109deb8a862389142d92f541.
  ATTACHMENT ID: 12822549

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
34 warning messages.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines longer than 100:
+ synchronized (this) { // the whole operation of closing and checking the count should be atomic
+  private List workingTables = Collections.synchronizedList(new ArrayList());

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):
    at org.apache.ambari.server.controller.internal.FeedResourceProviderTest.testUpdateResources(FeedResourceProviderTest.java:131)

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/500//testReport/
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/500//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/500//console

This message is automatically generated.


[jira] [Commented] (PHOENIX-3159) CachingHTableFactory may close HTable during eviction even if it is getting used for writing by another thread.

2016-08-08 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15411672#comment-15411672
 ] 

Ankit Singhal commented on PHOENIX-3159:


Thanks [~devaraj] for the review and for pointing out the problem.
As you asked earlier, I have implemented it with a reference count instead, since I was not actually using the instances held in workingTables anywhere. Hopefully the attached patch now handles the consistency properly.
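
A minimal sketch of the reference-counting idea, with illustrative names rather than the patch's exact code:

{code}
import java.io.IOException;
import org.apache.hadoop.hbase.client.HTableInterface;

// Illustrative sketch: the cached entry carries a reference count, and the
// underlying table is only closed once the count drops to zero, so LRU
// eviction can never close a table another writer is still using.
class CachedHTableWrapper {
    private final HTableInterface delegate;
    private int refCount; // guarded by 'this'

    CachedHTableWrapper(HTableInterface delegate) {
        this.delegate = delegate;
    }

    // getTable() takes a reference before handing the table to a writer.
    synchronized void incrementRefCount() {
        refCount++;
    }

    // Called both when a writer returns the table and when the LRU evicts
    // the entry; checking the count and closing happen atomically.
    synchronized void close() throws IOException {
        if (--refCount <= 0) {
            delegate.close();
        }
    }
}
{code}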


[jira] [Commented] (PHOENIX-3159) CachingHTableFactory may close HTable during eviction even if it is getting used for writing by another thread.

2016-08-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15411429#comment-15411429
 ] 

Hadoop QA commented on PHOENIX-3159:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12822512/PHOENIX-3159.patch
  against master branch at commit ba82b1cb5a14c2cf109deb8a862389142d92f541.
  ATTACHMENT ID: 12822512

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
34 warning messages.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+synchronized (workingTables) { // the whole operation of closing and removing the entry should be atomic
+  private List<HTableInterface> workingTables = Collections.synchronizedList(new ArrayList<HTableInterface>());
+// set. The eviction will not really close the underlying table until all the instances are cleared

{color:green}+1 core tests{color}.  The patch passed unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/499//testReport/
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/499//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/499//console

This message is automatically generated.


[jira] [Commented] (PHOENIX-3159) CachingHTableFactory may close HTable during eviction even if it is getting used for writing by another thread.

2016-08-08 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15411400#comment-15411400
 ] 

Devaraj Das commented on PHOENIX-3159:
--

[~an...@apache.org] thanks. I missed one thing though - adding and removing 
entries in the workingTables list should also be synchronized on 
workingTables. Otherwise it can race with the contains() check: a 'table' 
could be inserted into workingTables just after the workingTables.contains() 
check, and the table would then be closed even though it should remain open. 
A sketch of the race and the fix follows below.
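
To make the race concrete, here is a sketch under the workingTables approach, with illustrative names: the evictor holds the list's monitor across the whole contains() check and close, and writers add and remove under the same monitor:

{code}
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import org.apache.hadoop.hbase.client.HTableInterface;

// Illustrative sketch of the race and the fix. Without the synchronized
// blocks, a writer can add a table right after the evictor's contains()
// check, and the evictor then closes a table that is in active use.
class WorkingTableGuard {
    private final List<HTableInterface> workingTables =
            Collections.synchronizedList(new ArrayList<HTableInterface>());

    // Writer side: register the table before writing, under the list monitor.
    void beginWrite(HTableInterface table) {
        synchronized (workingTables) {
            workingTables.add(table);
        }
    }

    void endWrite(HTableInterface table) {
        synchronized (workingTables) {
            workingTables.remove(table);
        }
    }

    // Evictor side: the contains() check and close() must be one atomic step.
    void evict(HTableInterface table) throws IOException {
        synchronized (workingTables) {
            if (!workingTables.contains(table)) {
                table.close();
            }
        }
    }
}
{code}

A complete implementation would also have to close a table that eviction skipped here once the last writer removes it from workingTables, which is part of what makes the reference-counting variant simpler.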


[jira] [Commented] (PHOENIX-3159) CachingHTableFactory may close HTable during eviction even if it is getting used for writing by another thread.

2016-08-07 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15411345#comment-15411345
 ] 

Ankit Singhal commented on PHOENIX-3159:


Thanks [~devaraj] & [~enis] for the review (done offline).

One comment from [~enis], kept here for documentation:

{code}
In 1.0, of course, the whole Connection / HTable lifecycle is changed. 
Unfortunately, since Phoenix is still bound by 0.98 semantics, changing these 
would be different for Phoenix-HBase-0.98 versus the other branches. One good 
thing is that for HBase-2.0, this way of HTable handling / caching is going 
away. We should make it so that all of these index writers share the same 
connection obtained via the coprocessor, and every time an index writer wants 
to use an HTable, it instantiates one and closes that Table once it is done, 
instead of pooling / caching. Connection.getTable() is very cheap in 1.1+, and 
can be used to instantiate one Table per operation.
{code}
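
For HBase 1.1+, a sketch of the pattern described above (the table name and mutation list are placeholders): one shared Connection, one short-lived Table per write, no caching:

{code}
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Mutation;
import org.apache.hadoop.hbase.client.Table;

// Sketch of the per-operation pattern: Connection.getTable() is cheap in
// 1.1+, so each index write opens a Table, uses it, and closes it, while
// the shared Connection (e.g. obtained via the coprocessor) stays open.
class PerOperationIndexWriter {
    private final Connection connection;

    PerOperationIndexWriter(Connection connection) {
        this.connection = connection;
    }

    void write(TableName indexTable, List<Mutation> mutations) throws IOException {
        // try-with-resources closes only the lightweight Table handle,
        // not the shared connection, so there is no eviction race.
        try (Table table = connection.getTable(indexTable)) {
            table.batch(mutations, new Object[mutations.size()]);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IOException(e);
        }
    }
}
{code}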

