[
https://issues.apache.org/jira/browse/PHOENIX-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15411739#comment-15411739
]
Hadoop QA commented on PHOENIX-3159:
------------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest
attachment
http://issues.apache.org/jira/secure/attachment/12822549/PHOENIX-3159_v1.patch
against master branch at commit ba82b1cb5a14c2cf109deb8a862389142d92f541.
ATTACHMENT ID: 12822549
{color:green}+1 @author{color}. The patch does not contain any @author
tags.
{color:green}+1 tests included{color}. The patch appears to include 3 new
or modified tests.
{color:green}+1 javac{color}. The applied patch does not increase the
total number of javac compiler warnings.
{color:red}-1 javadoc{color}. The javadoc tool appears to have generated
34 warning messages.
{color:green}+1 release audit{color}. The applied patch does not increase
the total number of release audit warnings.
{color:red}-1 lineLengths{color}. The patch introduces the following lines longer than 100:
+ synchronized (this) { // the whole operation of closing and checking the count should be atomic
+ private List<HTableInterface> workingTables = Collections.synchronizedList(new ArrayList<HTableInterface>());
{color:green}+1 core tests{color}. The patch passed unit tests in .
{color:red}-1 core zombie tests{color}. There are 1 zombie test(s):
at org.apache.ambari.server.controller.internal.FeedResourceProviderTest.testUpdateResources(FeedResourceProviderTest.java:131)
Test results:
https://builds.apache.org/job/PreCommit-PHOENIX-Build/500//testReport/
Javadoc warnings:
https://builds.apache.org/job/PreCommit-PHOENIX-Build/500//artifact/patchprocess/patchJavadocWarnings.txt
Console output:
https://builds.apache.org/job/PreCommit-PHOENIX-Build/500//console
This message is automatically generated.
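For context, the two over-100-character lines flagged above suggest the patch tracks the HTables that writer threads are currently using and makes the close-and-check step atomic. A minimal sketch of how those two fragments might fit together (everything except the two flagged lines is an assumption for illustration, not the actual patch):
{code}
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

import org.apache.hadoop.hbase.client.HTableInterface;

class TrackedTableFactory {

    // Flagged patch line: tables currently checked out by writer threads.
    private List<HTableInterface> workingTables =
        Collections.synchronizedList(new ArrayList<HTableInterface>());

    HTableInterface checkOut(HTableInterface table) {
        synchronized (this) { // pairs with closeEvicted(): no check-out mid-close
            workingTables.add(table);
        }
        return table;
    }

    void checkIn(HTableInterface table) {
        workingTables.remove(table); // the writer is done with the table
        // (closing a previously evicted table on check-in is omitted here)
    }

    // Called when the LRU cache evicts a table.
    void closeEvicted(HTableInterface table) throws IOException {
        synchronized (this) { // the whole operation of closing and checking the count should be atomic
            if (!workingTables.contains(table)) {
                table.close(); // no writer holds it, so closing is safe
            }
            // Otherwise skip the close: a writer is still using the table.
        }
    }
}
{code}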
> CachingHTableFactory may close HTable during eviction even if it is getting
> used for writing by another thread.
> ---------------------------------------------------------------------------------------------------------------
>
> Key: PHOENIX-3159
> URL: https://issues.apache.org/jira/browse/PHOENIX-3159
> Project: Phoenix
> Issue Type: Bug
> Reporter: Ankit Singhal
> Assignee: Ankit Singhal
> Fix For: 4.8.1
>
> Attachments: PHOENIX-3159.patch, PHOENIX-3159_v1.patch
>
>
> CachingHTableFactory may close an HTable during eviction even while it is
> being used for writing by another thread, which causes the writing thread to
> fail and the index to be disabled.
> LRU eviction closes the HTable (or its underlying connection) when the cache
> is full and a new HTable is requested:
> {code}
> 2016-08-04 13:45:21,109 DEBUG [nat-s11-4-ioss-phoenix-1-5.openstacklocal,16020,1470297472814-index-writer--pool11-t35] client.ConnectionManager$HConnectionImplementation: Closing HConnection (debugging purposes only)
> java.lang.Exception
>   at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.internalClose(ConnectionManager.java:2423)
>   at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.close(ConnectionManager.java:2447)
>   at org.apache.hadoop.hbase.client.CoprocessorHConnection.close(CoprocessorHConnection.java:41)
>   at org.apache.hadoop.hbase.client.HTableWrapper.internalClose(HTableWrapper.java:91)
>   at org.apache.hadoop.hbase.client.HTableWrapper.close(HTableWrapper.java:107)
>   at org.apache.phoenix.hbase.index.table.CachingHTableFactory$HTableInterfaceLRUMap.removeLRU(CachingHTableFactory.java:61)
>   at org.apache.commons.collections.map.LRUMap.addMapping(LRUMap.java:256)
>   at org.apache.commons.collections.map.AbstractHashedMap.put(AbstractHashedMap.java:284)
>   at org.apache.phoenix.hbase.index.table.CachingHTableFactory.getTable(CachingHTableFactory.java:100)
>   at org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:160)
>   at org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:136)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
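> For clarity, the eviction path in the trace above amounts to an LRU map whose
> removeLRU hook closes the evicted table unconditionally. A minimal sketch of
> that pattern, reconstructed from the stack trace rather than the actual source:
> {code}
> import java.io.IOException;
>
> import org.apache.commons.collections.map.LRUMap;
> import org.apache.hadoop.hbase.client.HTableInterface;
>
> // Sketch of the problematic pattern: LRUMap evicts through removeLRU(),
> // and the override closes the HTable without checking whether another
> // thread is still writing through it.
> public class HTableInterfaceLRUMap extends LRUMap {
>
>     public HTableInterfaceLRUMap(int cacheSize) {
>         super(cacheSize);
>     }
>
>     @Override
>     protected boolean removeLRU(LinkEntry entry) {
>         HTableInterface table = (HTableInterface) entry.getValue();
>         try {
>             table.close(); // closes the table even if a writer still holds it
>         } catch (IOException e) {
>             // swallowed here for brevity
>         }
>         return true; // let the eviction proceed
>     }
> }
> {code}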
> But the IndexWriter was still using this old connection to write to the
> table, which had been closed during LRU eviction:
> {code}
> 2016-08-04 13:44:59,553 ERROR [htable-pool659-t1] client.AsyncProcess: Cannot get replica 0 location for {"totalColumns":1,"row":"\\xC7\\x03\\x04\\x06X\\x1C)\\x00\\x80\\x07\\xB0X","families":{"0":[{"qualifier":"_0","vlen":2,"tag":[],"timestamp":1470318296425}]}}
> java.io.IOException: hconnection-0x21f468be closed
>   at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1153)
>   at org.apache.hadoop.hbase.client.CoprocessorHConnection.locateRegion(CoprocessorHConnection.java:41)
>   at org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.findAllLocationsOrFail(AsyncProcess.java:949)
>   at org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.groupAndSendMultiAction(AsyncProcess.java:866)
>   at org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.resubmit(AsyncProcess.java:1195)
>   at org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.receiveGlobalFailure(AsyncProcess.java:1162)
>   at org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.access$1100(AsyncProcess.java:584)
>   at org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl$SingleServerRequestRunnable.run(AsyncProcess.java:727)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> Although the workaround is to increase the cache size
> (index.tablefactory.cache.size), we should still handle the closing of
> working HTables to avoid index write failures (which in turn disable the
> index).
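> Raising the cache size only makes eviction rarer; it does not remove the
> race. One general way to handle the close of working HTables is reference
> counting, where the table is really closed only when its last user releases
> it. A hypothetical sketch (names assumed, not necessarily the committed fix):
> {code}
> import java.io.IOException;
> import java.util.concurrent.atomic.AtomicInteger;
>
> import org.apache.hadoop.hbase.client.HTableInterface;
>
> // Hypothetical reference-counted handle: writers call acquire() before
> // using the table and release() when done; the cache calls release() on
> // eviction. The delegate is closed only when the count reaches zero, so
> // an in-flight write can never observe a closed table.
> class RefCountedTable {
>     private final HTableInterface delegate;
>     private final AtomicInteger refCount = new AtomicInteger(1); // the cache's own reference
>
>     RefCountedTable(HTableInterface delegate) {
>         this.delegate = delegate;
>     }
>
>     HTableInterface acquire() {
>         refCount.incrementAndGet(); // a writer checked the table out
>         return delegate;
>     }
>
>     void release() throws IOException {
>         if (refCount.decrementAndGet() == 0) {
>             delegate.close(); // last user (writer or evictor) really closes
>         }
>     }
> }
> {code}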
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)