[ https://issues.apache.org/jira/browse/HBASE-10319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13868610#comment-13868610 ]
Andrew Purtell commented on HBASE-10319:
----------------------------------------

bq. Since no new data is ever written, the existing periodic check is not activated.

Then this has been a long-standing bug, since it confounds the expectations set by hbase-default.xml:

{noformat}
<property>
  <name>hbase.regionserver.logroll.period</name>
  <value>3600000</value>
  <description>Period at which we will roll the commit log regardless of how many edits it has.</description>
</property>
{noformat}

> HLog should roll periodically to allow DN decommission to eventually complete.
> ------------------------------------------------------------------------------
>
>                 Key: HBASE-10319
>                 URL: https://issues.apache.org/jira/browse/HBASE-10319
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Jonathan Hsieh
>
> We encountered a situation where we had an essentially read-only table and
> attempted to do a clean HDFS DN decommission. A DN cannot decommission if it
> holds open blocks that are currently being written to. Because the HBase
> HLog file was open and contained some data (the HLog header), the DN could
> not decommission itself. Since no new data is ever written, the existing
> periodic check is not activated.
>
> After discussing with [~atm], it seems that although an HDFS semantics change
> would be ideal (e.g. HBase would not have to be aware of HDFS decommission,
> and the client would roll over), that would take much more effort than having
> HBase periodically force a log roll. This would enable the HDFS DN
> decommission to complete.

--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
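The fix discussed above amounts to rolling the WAL on elapsed time alone, so an idle log still gets closed and its open block finalized. A minimal sketch of such a time-based roll policy (hypothetical class and method names, not actual HBase code; the 3600000 ms default comes from hbase.regionserver.logroll.period in hbase-default.xml):

```java
// Hypothetical sketch: decide whether to roll the write-ahead log based
// purely on elapsed time, independent of how many edits were written.
// Rolling an idle log closes its open HDFS block, which unblocks a
// DataNode waiting to decommission.
public class LogRollPolicy {
    private final long rollPeriodMs; // e.g. hbase.regionserver.logroll.period = 3600000
    private long lastRollTimeMs;

    public LogRollPolicy(long rollPeriodMs, long startTimeMs) {
        this.rollPeriodMs = rollPeriodMs;
        this.lastRollTimeMs = startTimeMs;
    }

    /** True when the period has elapsed, even if zero edits were written. */
    public boolean shouldRoll(long nowMs) {
        return nowMs - lastRollTimeMs >= rollPeriodMs;
    }

    /** Record that a roll happened, resetting the timer. */
    public void rolled(long nowMs) {
        lastRollTimeMs = nowMs;
    }
}
```

A regionserver chore would invoke shouldRoll on each tick and force a roll when it returns true, rather than only rolling when the edit count or log size triggers one.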