[jira] [Comment Edited] (HBASE-17906) When a huge amount of data writing to hbase through thrift2, there will be a deadlock error.

2017-04-18 Thread Albert Lee (JIRA)

[ https://issues.apache.org/jira/browse/HBASE-17906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15972391#comment-15972391 ]

Albert Lee edited comment on HBASE-17906 at 4/18/17 3:56 PM:
-------------------------------------------------------------

hbase-thrift-17906-ForRecurr.zip contains a slightly modified copy of
hbase-examples/src/main/python/thrift2 for reproducing the problem.
Use the 'DemoClient.py' from this zip to write data through the thrift2
server. Once the Python client has been running longer than
'hbase.thrift.connection.max-idletime', all connections to the server are
interrupted; the thrift2 server then stops working for a while and throws
"hbase.ttypes.TIOError: TIOError(_message='Failed 1 action: IOException: 1
time, servers with issues: null location" back to DemoClient.py.
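
For reference, here is a minimal Java sketch of the same reproduction,
without the zip. It assumes a thrift2 server at 'thrift2-server-host:9090'
running without the framed-transport option, and a table 'example' with
column family 'family1'; all of those names are placeholders. Keep writing
for longer than 'hbase.thrift.connection.max-idletime' (10 minutes by
default) and the puts start failing with TIOError:

{code:java}
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Collections;

import org.apache.hadoop.hbase.thrift2.generated.TColumnValue;
import org.apache.hadoop.hbase.thrift2.generated.THBaseService;
import org.apache.hadoop.hbase.thrift2.generated.TPut;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class Thrift2IdleTimeRepro {
  public static void main(String[] args) throws Exception {
    // Placeholder host/port; assumes the server is not using -framed.
    TTransport transport = new TSocket("thrift2-server-host", 9090);
    transport.open();
    THBaseService.Client client =
        new THBaseService.Client(new TBinaryProtocol(transport));

    ByteBuffer table = ByteBuffer.wrap("example".getBytes(StandardCharsets.UTF_8));
    // Run past hbase.thrift.connection.max-idletime (10 minutes by default).
    long deadline = System.currentTimeMillis() + 15 * 60 * 1000L;
    long row = 0;
    while (System.currentTimeMillis() < deadline) {
      TPut put = new TPut(
          ByteBuffer.wrap(("row-" + row++).getBytes(StandardCharsets.UTF_8)),
          Collections.singletonList(new TColumnValue(
              ByteBuffer.wrap("family1".getBytes(StandardCharsets.UTF_8)),
              ByteBuffer.wrap("qual".getBytes(StandardCharsets.UTF_8)),
              ByteBuffer.wrap("value".getBytes(StandardCharsets.UTF_8)))));
      // Once the connection cache cleans the never-refreshed connection,
      // this call starts throwing TIOError despite the continuous writes.
      client.put(table, put);
    }
    transport.close();
  }
}
{code}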

The problem is that when a table is fetched from the HTablePools in
ThriftHBaseServiceHandler.java, the handler never refreshes the
corresponding connection in the connectionCache. Once
'hbase.thrift.connection.max-idletime' elapses, the connectionCache starts
cleaning up the connections that were never refreshed, and the problem
appears.
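
To make the mechanism concrete, here is a deliberately simplified sketch of
that failure mode; the class and field names are illustrative, not the
actual HBase ConnectionCache code. The cleaner evicts based on a
last-access timestamp that the table-lookup path never updates, so even a
connection doing heavy writes looks idle:

{code:java}
import java.util.concurrent.ConcurrentHashMap;

// Deliberately simplified model of the thrift2 handler's connection cache.
// Class and field names are illustrative, not the actual HBase classes.
public class ConnectionCacheSketch {

  static class CachedConnection {
    volatile long lastAccessTime = System.currentTimeMillis();
    volatile boolean closed = false;
  }

  private final ConcurrentHashMap<String, CachedConnection> cache =
      new ConcurrentHashMap<>();
  private final long maxIdleTime; // hbase.thrift.connection.max-idletime

  public ConnectionCacheSketch(long maxIdleTime) {
    this.maxIdleTime = maxIdleTime;
  }

  // The buggy path: hands out the cached connection without touching
  // lastAccessTime, so a client that only ever goes through getConnection()
  // looks idle no matter how much it writes.
  public CachedConnection getConnection(String user) {
    return cache.computeIfAbsent(user, u -> new CachedConnection());
  }

  // Periodic cleaner: closes anything whose timestamp is older than
  // maxIdleTime -- including connections that are in heavy use, because
  // their timestamp was never refreshed. Writes then fail with TIOError.
  public void cleanIdleConnections() {
    long now = System.currentTimeMillis();
    cache.entrySet().removeIf(entry -> {
      if (now - entry.getValue().lastAccessTime > maxIdleTime) {
        entry.getValue().closed = true;
        return true;
      }
      return false;
    });
  }
}
{code}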

The problem appeared in version 0.98. I will test whether it also exists in
branch-1+ and master.

And sorry about the issue title. It is not a deadlock; it is just that the
connection is not refreshed on getTable. I wrote the patch at the beginning
of this year and mixed it up with another issue.


> When a huge amount of data writing to hbase through thrift2, there will be a
> deadlock error.
> -----------------------------------------------------------------------------
>
>                  Key: HBASE-17906
>                  URL: https://issues.apache.org/jira/browse/HBASE-17906
>              Project: HBase
>           Issue Type: Bug
>           Components: Client
>     Affects Versions: 0.98.21, 0.98.22, 0.98.23, 0.98.24, 0.98.25
>          Environment: hadoop 2.5.2, hbase 0.98.20, jdk1.8.0_77
>             Reporter: Albert Lee
>               Labels: patch
>              Fix For: 0.98.21, 0.98.22, 0.98.23, 0.98.24, 0.98.25
>
>          Attachments: HBASE-17906.branch-0.98.001.patch,
>                       HBASE-17906.branch-0.98.002.patch,
>                       HBASE-17906.master.001.patch,
>                       HBASE-17906.master.002.patch,
>                       hbase-thrift-17906-ForRecurr.zip
>
>
> When a huge amount of data writing to hbase through thrift2, there will be a
> deadlock error.





[jira] [Comment Edited] (HBASE-17906) When a huge amount of data writing to hbase through thrift2, there will be a deadlock error.

2017-04-13 Thread Albert Lee (JIRA)

[ https://issues.apache.org/jira/browse/HBASE-17906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15967506#comment-15967506 ]

Albert Lee edited comment on HBASE-17906 at 4/13/17 12:18 PM:
--------------------------------------------------------------

[~yuzhih...@gmail.com] I have already deleted line 388 of ThriftServer.java
in patch 002, which findbugs flagged, but why does Hadoop QA still show me
this warning?







[jira] [Comment Edited] (HBASE-17906) When a huge amount of data writing to hbase through thrift2, there will be a deadlock error.

2017-04-12 Thread Albert Lee (JIRA)

[ https://issues.apache.org/jira/browse/HBASE-17906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15966020#comment-15966020 ]

Albert Lee edited comment on HBASE-17906 at 4/12/17 3:11 PM:
-------------------------------------------------------------

I found the cause: the htablePools cache and the connection cache have the
same timeout. I added a refresh mechanism and it works for me now.

Here is my patch:
https://patch-diff.githubusercontent.com/raw/apache/hbase/pull/48.patch
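
The idea of the refresh, sketched against the simplified cache model in the
4/18 comment above (illustrative names only; see the patch link for the
real change): stamp the connection on every lookup, so the idle cleaner
only evicts connections that are genuinely idle.

{code:java}
// Sketch of the fix, applied to the simplified ConnectionCacheSketch above:
// refresh the last-access timestamp on every lookup, so the idle cleaner
// only evicts connections that have genuinely gone quiet.
public CachedConnection getConnection(String user) {
  CachedConnection conn = cache.computeIfAbsent(user, u -> new CachedConnection());
  conn.lastAccessTime = System.currentTimeMillis(); // the refresh that was missing
  return conn;
}
{code}

With this, a client that keeps writing keeps its connection alive, and only
clients that really go quiet for 'hbase.thrift.connection.max-idletime' are
cleaned up.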





--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

