[ https://issues.apache.org/jira/browse/HBASE-17906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15966020#comment-15966020 ]
Albert Lee edited comment on HBASE-17906 at 4/12/17 3:08 PM:
-------------------------------------------------------------

I found that the htablePools cache and the connection cache have the same timeout. I added a refresh mechanism, and it works for me now.

was (Author: albertlee...@gmail.com):
I found that the htablePools cache and the connection cache have the same timeout. I added a refresh mechanism, and it works for me now.

> When a huge amount of data is written to HBase through thrift2, a deadlock
> error occurs.
> --------------------------------------------------------------------------------------------
>
>                 Key: HBASE-17906
>                 URL: https://issues.apache.org/jira/browse/HBASE-17906
>             Project: HBase
>          Issue Type: Bug
>          Components: Client
>    Affects Versions: 0.98.21
>         Environment: hadoop 2.5.2, hbase 0.98.20, jdk1.8.0_77
>            Reporter: Albert Lee
>             Fix For: 1.2.2, 0.98.21
>
>
> When a huge amount of data is written to HBase through thrift2, a deadlock
> error occurs.

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
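The comment does not include the actual patch, so as an illustration only, here is a minimal sketch of the idea behind a "refresh mechanism": a TTL cache whose entries have their timeout reset on every access, so an actively used table pool is not evicted at the same instant as its parent connection. The class name `RefreshingCache` and all details are hypothetical, not HBase or thrift2 code.

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: a TTL cache that refreshes an entry's timestamp on
// each access, so entries in active use never expire. This avoids the case
// where two caches (e.g. a table pool cache and a connection cache) with the
// same timeout both evict at once while one still depends on the other.
public class RefreshingCache<K, V> {
    private static final class Entry<V> {
        final V value;
        volatile long lastAccess;
        Entry(V value, long now) { this.value = value; this.lastAccess = now; }
    }

    private final ConcurrentHashMap<K, Entry<V>> map = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public RefreshingCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    public void put(K key, V value) {
        map.put(key, new Entry<>(value, System.currentTimeMillis()));
    }

    // Returns the cached value, or null if the entry has exceeded its TTL.
    // A successful lookup refreshes the entry's timestamp.
    public V get(K key) {
        Entry<V> e = map.get(key);
        if (e == null) return null;
        long now = System.currentTimeMillis();
        if (now - e.lastAccess > ttlMillis) {
            map.remove(key, e);  // expired: drop only this exact entry
            return null;
        }
        e.lastAccess = now;      // refresh on access
        return e.value;
    }

    public static void main(String[] args) throws InterruptedException {
        RefreshingCache<String, String> cache = new RefreshingCache<>(100);
        cache.put("conn", "connection-1");
        Thread.sleep(60);
        cache.get("conn");       // access refreshes the entry's timestamp
        Thread.sleep(60);
        // 120 ms after put, but only 60 ms after the last access: still cached
        System.out.println(cache.get("conn"));
    }
}
```

With a plain fixed-expiry cache, the entry above would have been evicted at 100 ms regardless of use; the refresh-on-access variant keeps a live resource valid for as long as callers keep touching it.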