[jira] [Commented] (HBASE-24) Scaling: Too many open file handles to datanodes

2015-01-07 Thread Cosmin Lehene (JIRA)

[ https://issues.apache.org/jira/browse/HBASE-24?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14267759#comment-14267759 ]

Cosmin Lehene commented on HBASE-24:


[~saint@gmail.com] this is a venerable one from the oldies-but-goldies 
series :) Since we long ago learnt to raise the xceiver limit, the max open 
file handles, and the like, we haven't been affected; however, I'm curious 
whether this is something that would be worth a refresh.
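
For anyone landing here now, the knobs being referred to are, as far as I 
know, the datanode transfer-thread ("xceiver") limit and the per-process 
file-handle limit. A minimal sketch with illustrative values only, not 
recommendations; user names and numbers are assumptions:

    <!-- hdfs-site.xml on each datanode; the property was spelled
         dfs.datanode.max.xcievers in older Hadoop releases -->
    <property>
      <name>dfs.datanode.max.transfer.threads</name>
      <value>4096</value>
    </property>

    # /etc/security/limits.conf on regionserver and datanode hosts
    # (user names are assumptions; adjust to your deployment)
    hbase  -  nofile  32768
    hdfs   -  nofile  32768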

> Scaling: Too many open file handles to datanodes
> 
>
> Key: HBASE-24
> URL: https://issues.apache.org/jira/browse/HBASE-24
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Reporter: stack
>Priority: Blocker
> Attachments: HBASE-823.patch, MonitoredReader.java
>
>
> We've been here before (HADOOP-2341).
> Today the Rapleaf folks gave me an lsof listing from a regionserver. It had 
> thousands of open sockets to datanodes, all in ESTABLISHED and CLOSE_WAIT 
> state. On average they seem to have about ten file descriptors/sockets open 
> per region (they have 3 column families IIRC; each family can have between 
> 1 and 5 or so mapfiles open -- 3 is the max, but while compacting we open a 
> new one, etc.).
> They have thousands of regions. 400 regions -- ~100G, which is not that 
> much -- take about 4k open file handles.
> If they want a regionserver to serve a decent disk's worth -- 300-400G -- 
> then that's maybe 1600 regions... 16k file handles. With more than just 3 
> column families, we are in danger of blowing out limits if they are 32k.
> A DFSClient that used non-blocking I/O would help applications like HBase. 
> (The datanode doesn't have this problem as badly -- the CLOSE_WAIT sockets on 
> the regionserver side, the bulk of the open fds in the Rapleaf listing, don't 
> have a corresponding open resource on the datanode end.)
> We could also just open mapfiles as needed, but that would kill our 
> random-read performance, and it's bad enough already.
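
The arithmetic in the description, as a back-of-the-envelope sketch (a 
hypothetical helper, not HBase code):

    // Roughly ten descriptors/sockets per region, per the lsof observation.
    public final class FdEstimate {
        static long estimate(long regions, int fdsPerRegion) {
            return regions * fdsPerRegion;
        }
        public static void main(String[] args) {
            System.out.println(estimate(400, 10));   // ~4k handles for ~100G
            System.out.println(estimate(1600, 10));  // ~16k for 300-400G, close to a 32k ulimit
        }
    }

And the non-blocking-I/O idea, as a generic java.nio sketch -- one selector 
thread multiplexing many datanode sockets -- not the actual DFSClient or the 
HDFS wire protocol; class name, hosts, and port are placeholders:

    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.SocketChannel;

    public class MultiplexedReads {
        public static void main(String[] args) throws Exception {
            Selector selector = Selector.open();
            for (String host : new String[] {"dn1", "dn2", "dn3"}) {
                SocketChannel ch = SocketChannel.open();
                ch.configureBlocking(false);      // never park a thread per socket
                ch.connect(new InetSocketAddress(host, 50010));
                ch.register(selector, SelectionKey.OP_CONNECT);
            }
            ByteBuffer buf = ByteBuffer.allocate(64 * 1024);
            while (selector.select() > 0) {
                for (SelectionKey key : selector.selectedKeys()) {
                    SocketChannel ch = (SocketChannel) key.channel();
                    if (key.isConnectable() && ch.finishConnect()) {
                        key.interestOps(SelectionKey.OP_READ);  // connected; wait for data
                    } else if (key.isReadable()) {
                        buf.clear();
                        if (ch.read(buf) < 0) {   // EOF: release the fd promptly
                            key.cancel();
                            ch.close();
                        }
                    }
                }
                selector.selectedKeys().clear();
            }
        }
    }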



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-24) Scaling: Too many open file handles to datanodes

2011-05-25 Thread Ted Yu (JIRA)

[ https://issues.apache.org/jira/browse/HBASE-24?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13039507#comment-13039507 ]

Ted Yu commented on HBASE-24:
-

Should we take further action on this JIRA?


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-24) Scaling: Too many open file handles to datanodes

2011-08-22 Thread Alex Newman (JIRA)

[ https://issues.apache.org/jira/browse/HBASE-24?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13089042#comment-13089042 ]

Alex Newman commented on HBASE-24:
--

The line

    +conf); // defer opening streams

is in the patch. Are we actually still deferring? That seems wrong.
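
For context, "defer opening streams" presumably means something like the 
following (a hypothetical illustration, not the actual patch): the reader 
keeps only the path and opens the underlying HDFS stream on first use, so an 
idle mapfile holds no datanode socket:

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    class LazyReader {
        private final Path path;
        private final Configuration conf;
        private FSDataInputStream in;          // null until the first read

        LazyReader(Path path, Configuration conf) {
            this.path = path;
            this.conf = conf;                  // defer opening streams
        }

        synchronized FSDataInputStream stream() throws IOException {
            if (in == null) {
                in = FileSystem.get(conf).open(path);  // fd/socket created on demand
            }
            return in;
        }

        synchronized void close() throws IOException {
            if (in != null) {
                in.close();                    // give the handle back when idle
                in = null;
            }
        }
    }

The trade-off is the one the description already names: every deferred open 
adds a round trip to the read path, which hurts random reads.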

