[ https://issues.apache.org/jira/browse/HBASE-2382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12899481#action_12899481 ]

Jean-Daniel Cryans commented on HBASE-2382:
-------------------------------------------

For the record, the situation I described happened this morning to some new-ish 
users. They ended up with a region server that had 133k HLogs because they 
forgot to set dfs.replication=1. We need to come up with something friendlier.

> Don't rely on fs.getDefaultReplication() to roll HLogs
> ------------------------------------------------------
>
>                 Key: HBASE-2382
>                 URL: https://issues.apache.org/jira/browse/HBASE-2382
>             Project: HBase
>          Issue Type: Improvement
>            Reporter: Jean-Daniel Cryans
>            Assignee: Nicolas Spiegelberg
>             Fix For: 0.90.0
>
>         Attachments: 2382-v5-TRUNK.patch, HBASE-2382-20.4.patch, 
> HBASE-2382-documentation.patch
>
>
> As I was commenting in HBASE-2234, using fs.getDefaultReplication() to roll 
> HLogs if they lose replicas isn't reliable since that value is client-side 
> and unless HBase is configured with it or has Hadoop's configurations on its 
> classpath, it will do the wrong thing.
> Dhruba added:
> bq. Can we use <hlogpath>.getFileStatus().getReplication() instead of 
> fs.getDefaultReplication()? This will ensure that we look at the repl 
> factor of the precise file we are interested in, rather than what the 
> system-wide default value is.
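
The check Dhruba suggests can be sketched as a small decision helper. This is a hypothetical illustration, not the actual HBase patch: the method name shouldRollLog and its integer inputs are made up for the example. In real code the per-file factor would come from Hadoop's fs.getFileStatus(hlogPath).getReplication(), while the buggy behavior compared against the client-side fs.getDefaultReplication().

```java
// Hypothetical sketch of the HBASE-2382 roll check: compare the live replica
// count of the current HLog against the replication factor the file itself
// was created with, instead of the client-side default.
public class LogRollCheck {

    /** Returns true when the HLog appears to have lost replicas and should roll. */
    static boolean shouldRollLog(int currentReplicas, int fileReplication) {
        return currentReplicas < fileReplication;
    }

    public static void main(String[] args) {
        // Scenario from the comment above: file created with dfs.replication=1
        // but the HBase client's default is 3. Keying off the default makes
        // every log look under-replicated (1 < 3), so the server rolls
        // endlessly and piles up logs; keying off the per-file factor does not.
        System.out.println(shouldRollLog(1, 3)); // default-based check: spurious roll
        System.out.println(shouldRollLog(1, 1)); // per-file check: no roll
    }
}
```

Using the per-file replication also means the check stays correct even when individual HLogs are created with a non-default factor.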

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
